
Interconnect Your Future


Presentation Transcript


  1. Interconnect Your Future Gilad Shainer, VP of Marketing Dec 2013

  2. Leading Supplier of End-to-End Interconnect Solutions. Comprehensive end-to-end software accelerators and management: MXM (Mellanox Messaging Acceleration), FCA (Fabric Collectives Acceleration), UFM (Unified Fabric Management), VSA (Storage Accelerator, iSCSI) and UDA (Unstructured Data Accelerator), covering messaging, management, and storage and data. Comprehensive end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, switches/gateways, host/fabric software, metro/WAN, cables/modules.

  3. Mellanox InfiniBand Paves the Road to Exascale Computing. Accelerating half of the world’s Petascale systems. [Slide shows examples of Mellanox Connected Petascale systems.]

  4. TOP500 InfiniBand Accelerated Petascale Capable Machines • Mellanox FDR InfiniBand systems Tripled from Nov’12 to Nov’13 • Accelerating 63% of the Petaflop capable systems (12 systems out of 19)

  5. FDR InfiniBand Delivers Highest Return on Investment. [Four application benchmark charts; higher is better in each.] Source: HPC Advisory Council

  6. Connect-IB Architectural Foundation for Exascale Computing

  7. Mellanox Connect-IB: The World’s Fastest Adapter • The 7th generation of Mellanox interconnect adapters • World’s first 100Gb/s interconnect adapter (dual-port FDR 56Gb/s InfiniBand) • Delivers 137 million messages per second – 4X higher than the competition • Supports the new, innovative InfiniBand scalable transport – Dynamically Connected

  8. Connect-IB Provides Highest Interconnect Throughput. [Bandwidth chart, higher is better: Connect-IB FDR (dual port) vs. ConnectX-3 FDR, ConnectX-2 QDR and a competing InfiniBand adapter.] Source: Prof. DK Panda. Gain your performance leadership with Connect-IB adapters.

  9. Connect-IB Delivers Highest Application Performance. 200% higher performance versus the competition with only 32 nodes; the performance gap increases with cluster size.

  10. Dynamically Connected Transport Advantages

  11. Mellanox ScalableHPC Accelerations for Parallel Programs

  12. Mellanox ScalableHPC Accelerates Parallel Applications. [Diagram: MPI, SHMEM and PGAS processes (P1, P2, P3), each with its own memory or a logical shared memory, layered over the InfiniBand Verbs API.] • FCA: topology-aware collective optimization, hardware multicast, separate virtual fabric for collectives, CORE-Direct hardware offload. • MXM: reliable messaging optimized for Mellanox HCAs, hybrid transport mechanism, efficient memory registration, receive-side tag matching.
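
As a concrete reference point (not from the slides), the operations FCA and CORE-Direct accelerate are ordinary MPI collectives; the application code does not change. A minimal C/MPI sketch that any MPI library will run, and that an FCA/CORE-Direct-enabled MPI would offload to the fabric:

    /* Minimal MPI collective example (illustrative, not from the deck). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;   /* each rank contributes its ID */

        /* Collectives like these are what FCA optimizes (topology awareness,
         * hardware multicast) and CORE-Direct offloads to the HCA. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %.0f\n", size - 1, global);

        MPI_Finalize();
        return 0;
    }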

  13. Mellanox FCA Collective Scalability

  14. Nonblocking Alltoall (Overlap-Wait) Benchmark. CORE-Direct offload lets the nonblocking Alltoall benchmark overlap communication with computation, keeping almost 100% of the time in compute.
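
A minimal sketch of the overlap-wait pattern behind such a benchmark, assuming an MPI-3 library with nonblocking collectives; the buffer sizes and compute loop are illustrative placeholders, not the benchmark's actual code. With CORE-Direct, the Alltoall progresses in the HCA while the compute loop runs, so nearly all wall-clock time is compute:

    #include <mpi.h>
    #include <stdlib.h>

    /* Stand-in for application work done while the collective progresses. */
    static void do_compute(double *x, int n)
    {
        for (int i = 0; i < n; i++)
            x[i] = x[i] * 1.000001 + 1.0;
    }

    int main(int argc, char **argv)
    {
        int rank, size, count = 1024;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *sendbuf = malloc((size_t)count * size * sizeof(double));
        double *recvbuf = malloc((size_t)count * size * sizeof(double));
        double *work    = malloc((size_t)count * sizeof(double));
        for (int i = 0; i < count * size; i++) sendbuf[i] = rank;
        for (int i = 0; i < count; i++) work[i] = 1.0;

        /* Start the collective, compute while it runs, then wait. */
        MPI_Ialltoall(sendbuf, count, MPI_DOUBLE,
                      recvbuf, count, MPI_DOUBLE, MPI_COMM_WORLD, &req);
        do_compute(work, count);
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        free(sendbuf); free(recvbuf); free(work);
        MPI_Finalize();
        return 0;
    }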

  15. Accelerator and GPU Offloads

  16. GPUDirect 1.0. [Diagrams of the transmit and receive data paths: without GPUDirect, data moves between GPU memory and the InfiniBand adapter through two separate system-memory buffers (CUDA driver and InfiniBand driver); with GPUDirect 1.0 the two drivers share one pinned system-memory buffer, eliminating a copy.]
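
For contrast with the next slide, a hedged C sketch of the host-staged path this diagram describes: the GPU buffer is copied into a pinned host buffer, and MPI sends that host buffer over InfiniBand. GPUDirect 1.0's specific contribution is letting the CUDA and InfiniBand drivers share that pinned buffer; the staging pattern itself looks like this (names and sizes are illustrative; run with at least 2 ranks):

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 20;
        float *dev_buf, *host_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **)&dev_buf, n * sizeof(float));
        cudaMallocHost((void **)&host_buf, n * sizeof(float));  /* pinned staging buffer */

        if (rank == 0) {
            /* ... a kernel fills dev_buf ... */
            cudaMemcpy(host_buf, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
            MPI_Send(host_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);  /* HCA reads host memory */
        } else if (rank == 1) {
            MPI_Recv(host_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            cudaMemcpy(dev_buf, host_buf, n * sizeof(float), cudaMemcpyHostToDevice);
        }

        cudaFreeHost(host_buf);
        cudaFree(dev_buf);
        MPI_Finalize();
        return 0;
    }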

  17. GPUDirect RDMA. [Diagrams of the transmit and receive data paths: with GPUDirect 1.0 data still passes through system memory and the CPU; with GPUDirect RDMA the InfiniBand adapter reads and writes GPU memory directly across PCIe, bypassing system memory entirely.]
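
With a CUDA-aware MPI built for GPUDirect RDMA (MVAPICH2-GDR is one example), the device pointer is handed straight to MPI and the HCA moves the data over PCIe without staging in system memory. A minimal sketch under that assumption (names and sizes are illustrative; run with at least 2 ranks):

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank, n = 1 << 20;
        float *dev_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaMalloc((void **)&dev_buf, n * sizeof(float));

        if (rank == 0) {
            /* ... a kernel fills dev_buf ... */
            MPI_Send(dev_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);  /* device pointer, no host copy */
        } else if (rank == 1) {
            MPI_Recv(dev_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        cudaFree(dev_buf);
        MPI_Finalize();
        return 0;
    }

The application-level calls are the same as in the host-staged case on the previous slide; the difference is entirely in the data path underneath, which is where the latency and bandwidth gains on the next slide come from.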

  18. Performance of MVAPICH2 with GPUDirect RDMA. [Charts: GPU-GPU internode MPI latency (lower is better) and GPU-GPU internode MPI bandwidth (higher is better).] 67% lower latency (5.49 usec) and a 5X increase in throughput. Source: Prof. DK Panda

  19. Performance of MVAPICH2 with GPUDirect RDMA. [Chart: execution time of the HSG (Heisenberg Spin Glass) application on 2 GPU nodes versus problem size.] Source: Prof. DK Panda

  20. Roadmap

  21. Technology Roadmap – One-Generation Lead over the Competition. [Timeline 2000–2020: Mellanox interconnect generations at 20Gb/s, 40Gb/s, 56Gb/s, 100Gb/s and 200Gb/s spanning the Terascale, Petascale and Exascale eras; TOP500 milestones marked include Virginia Tech (Apple, 3rd, 2003), “Roadrunner” (1st), and Mellanox Connected mega supercomputers.]

  22. Paving the Road for 100Gb/s and Beyond. Recent acquisitions are part of Mellanox’s strategy to make 100Gb/s deployments as easy as 10Gb/s: copper cables (passive, active), optical cables (VCSEL), silicon photonics.

  23. The Only Provider of End-to-End 40/56Gb/s Solutions. Comprehensive end-to-end InfiniBand and Ethernet portfolio: ICs, adapter cards, switches/gateways, host/fabric software, metro/WAN, cables/modules. From data center to metro and WAN, on x86, ARM and Power based compute and storage platforms. The interconnect provider for 10Gb/s and beyond.

  24. Thank You
