
Paving The Road to Exascale Computing


Presentation Transcript


  1. Paving The Road to Exascale Computing Highest Performing, Most Efficient End-to-End Connectivity for Servers and Storage Peter Waxman VP of HPC Sales April 2011 HPC@mellanox.com

  2. Company Overview • Leading connectivity solutions provider for data center servers and storage systems • Foundation for the world’s most powerful and energy-efficient systems • >7.0M ports shipped as of Dec. ’10 • Company headquarters: Yokneam, Israel; Sunnyvale, California • ~700 employees; worldwide sales & support • Solid financial position: record revenue in FY’10 ($154.6M); Q4’10 revenue of $40.7M • Completed acquisition of Voltaire, Ltd. • Ticker: MLNX • Recent Awards

  3. Connectivity Solutions for Efficient Computing: Enterprise HPC, High-end HPC, HPC Clouds • Mellanox Interconnect Networking Solutions: Host/Fabric Software, ICs, Adapter Cards, Switches/Gateways, Cables • Leading Connectivity Solution Provider for Servers and Storage

  4. Combining Best-in-Class Systems Knowledge and Software with Best-in-Class Silicon
    InfiniBand:
    • Mellanox brings: InfiniBand and 10GbE silicon technology & roadmap • Adapter leadership • Advanced HW features • End-to-end experience • Strong OEM engagements • InfiniScale & ConnectX HCA and switch silicon • HCA adapters, FIT • Scalable switch systems • Dell, HP, IBM, and Oracle
    • Voltaire brings: InfiniBand and 10GbE switch systems experience • IB switch market share leadership • End-to-end SW & systems solutions • Strong enterprise customer engagements • Grid Directors & software • UFM fabric management SW • Applications acceleration SW • Enterprise-class switches • HP, IBM
    • Combined entity: Silicon, adapters and systems leadership • IB market share leadership • Full service offering • Strong customer and OEM engagements • InfiniBand market leader • End-to-end silicon, systems, software solutions • FDR/EDR roadmap • Application acceleration and fabric management software • Full OEM coverage
    Ethernet/VPI:
    • Mellanox brings: 10GbE and 40GbE adapters • Highest-performance Ethernet silicon • 10GbE LOM and mezzanine adapters at Dell, HP and IBM
    • Voltaire brings: 10GbE Vantage switches & SW • UFM fabric management SW • Applications acceleration SW • 24-, 48-, and 288-port 10GbE switches • HP, IBM
    • Combined entity: Ethernet innovator • End-to-end silicon, systems, software solutions • 10GbE, 40GbE and 100GbE roadmap • Application acceleration and fabric management software • Strong OEM coverage

  5. Connecting the Data Center Ecosystem • Hardware OEMs • Software Partners • End Users • Enterprise Data Centers • Servers • High-Performance Computing • Storage • Embedded

  6. Adapter market and performance leadership: first to market with 40Gb/s (QDR) adapters; roadmap to end-to-end 56Gb/s (FDR) in 2011; delivers next-gen application efficiency capabilities; global Tier-1 server and storage availability (Bull, Dawning, Dell, Fujitsu, HP, IBM, Oracle, SGI, T-Platforms) • Comprehensive, performance-leading switch family: industry’s highest density and scalability; world’s lowest port-to-port latency (25-50% lower than competitors) • Comprehensive and feature-rich management/acceleration software: enhancing application performance and network ease-of-use • High-performance converged I/O gateways: optimal scaling, consolidation, energy efficiency; lowers space and power and increases application performance • Copper and fiber cables: exceed IBTA mechanical & electrical standards; ultimate reliability and signal integrity • Most Complete End-to-End InfiniBand Solutions

  7. Expanding End-to-End Ethernet Leadership • Industry’s highest performing Ethernet NIC: 10/40GigE w/FCoE with hardware offload; Ethernet industry’s lowest end-to-end latency (1.3μs); faster application completion, better server utilization • Tremendous ecosystem support momentum: multiple Tier-1 OEM design wins (Dell, IBM, HP) for servers, LAN on Motherboard (LOM), and storage systems; comprehensive OS support (VMware, Citrix, Windows, Linux) • High capacity, low latency 10GigE switches: 24 to 288 ports with 600-1200ns latency; sold through multiple Tier-1 OEMs (IBM, HP); consolidation over shared fabrics • Integrated, complete management offering: Service Oriented Infrastructure Management with open APIs

  8. Mellanox in the TOP500 • Mellanox InfiniBand builds the most powerful clusters: connects 4 out of the Top 10 and 61 systems in the Top 100 • InfiniBand represents 43% of the TOP500 • 98% of the InfiniBand clusters use Mellanox solutions • Mellanox InfiniBand enables the highest utilization on the TOP500: up to 96% system utilization • Mellanox 10GigE is the highest ranked 10GigE system (#126) • [Chart: TOP500 InfiniBand trends, number of clusters: 142 (Nov ’08), 182 (Nov ’09), 215 (Nov ’10)]

  9. Mellanox Accelerations for Scalable HPC • Scalable offloading for MPI/SHMEM: 10s-100s% boost • GPUDirect, accelerating GPU communications: 30+% boost • Maximizing network utilization through routing & management (3D-Torus, Fat-Tree): 80+% boost • Highest throughput and scalability (paving the road to exascale computing)
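To make the MPI/SHMEM offloading item concrete, here is a minimal sketch in C, assuming any standard MPI implementation (e.g. Open MPI or MVAPICH2), of an MPI_Allreduce: the kind of collective that FCA/CORE-Direct-style offloads accelerate by moving the reduction work off the host CPU and into the adapter and switches. This is only the illustrative host-side call, not Mellanox's offload code.

```c
/* Minimal MPI_Allreduce sketch: the collective operation that
 * hardware collective offloads target. Compile with: mpicc allreduce_demo.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    local = (double)rank;   /* per-rank contribution */

    /* Every rank contributes one value and receives the global sum.
     * With collective offload, the reduction tree runs in the fabric
     * rather than on the host CPU. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %.0f\n", size, global);

    MPI_Finalize();
    return 0;
}
```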

  10. Software Accelerators: Highest Performance [Charts: MPI performance, iSCSI storage, messaging latency]

  11. UFM Fabric Management • Provides deep visibility: real-time and historical monitoring of fabric health and performance; central fabric dashboard; unique fabric-wide congestion map • Optimizes performance: Quality of Service; Traffic Aware Routing Algorithm (TARA); multicast routing optimization • Eliminates complexity: one pane of glass to monitor and configure fabrics of thousands of nodes; enables advanced features like segmentation and QoS by automating provisioning; abstracts the physical layer into logical entities such as jobs and resource groups • Maximizes fabric utilization: threshold-based alerts to quickly identify fabric faults; performance optimization for maximum link utilization; open architecture for integration with other tools, in-context actions, and fabric database

  12. LLNL Hyperion Cluster • 1152 nodes, dedicated cluster for development testing • Open Environment • CPUs: mix of Intel 4-core Xeon L5420 and 4-core Xeon E5530 • Mellanox InfiniBand QDR switches and adapters

  13. Mellanox MPI Optimizations – MPI Natural Ring

  14. Mellanox MPI Optimization – MPI Random Ring
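The two benchmark slides above are charts that are not reproduced in this transcript. As a rough illustration of what a natural-ring test measures, below is a minimal MPI sketch assuming a generic MPI implementation; the message size, repetition count, and timing method are illustrative and this is not the LLNL benchmark code. The random-ring variant differs only in that ranks are paired according to a random permutation instead of their natural order.

```c
/* Minimal natural-ring exchange sketch: each rank sends to (rank+1) % size
 * and receives from (rank-1+size) % size, then reports per-rank bandwidth.
 * MSG_BYTES and REPS are illustrative values only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB per message */
#define REPS      100

int main(int argc, char **argv)
{
    int rank, size;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc(MSG_BYTES);
    char *recvbuf = malloc(MSG_BYTES);
    memset(sendbuf, rank & 0xff, MSG_BYTES);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        /* Simultaneous send/receive around the ring */
        MPI_Sendrecv(sendbuf, MSG_BYTES, MPI_BYTE, right, 0,
                     recvbuf, MSG_BYTES, MPI_BYTE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("per-rank ring send bandwidth: %.1f MB/s\n",
               (double)MSG_BYTES * REPS / (t1 - t0) / 1e6);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```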

  15. Mellanox MPI Optimization – Highest Scalability at LLNL • Mellanox MPI optimizations enable linear strong scaling for LLNL applications • World-leading performance and scalability
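(For reference, "strong scaling" here means the problem size is held fixed while nodes are added, so ideal behavior is a run time of roughly T(1)/n on n nodes, i.e. a parallel efficiency E(n) = T(1) / (n · T(n)) that stays close to 1.)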

  16. Summary • Performance: lowest latency, highest throughput, highest message rate • Scalability: highest application scalability through network accelerations • Reliability: from silicon to system, highest signal/data integrity • Efficiency: highest CPU/GPU availability through advanced offloading • Academic research, computer-aided engineering, bioscience, oil and gas, weather, financial, cloud & Web 2.0, clustered database • Mellanox Connectivity Solutions

  17. Software Accelerators: Highest Performance [Charts: MPI performance, iSCSI storage, messaging latency]

  18. Thank You HPC@mellanox.com

  19. Mellanox Scalable InfiniBand Solutions • Mellanox InfiniBand solutions are Petascale-proven • Connecting 4 of 7 WW Petascale systems • Delivering highest scalability, performance, robustness • Advanced offloading/acceleration capabilities for MPI/SHMEM • Efficiency, congestion-free networking solutions • Mellanox InfiniBand solutions enable flexible HPC • Complete hardware offloads – transport, MPI • Allows CPU interventions and PIO transactions • Latency: ~1us ping pong; Bandwidth: 40Gb/s with QDR, 56Gb/s with FDR per port • Delivering advanced HPC technologies and solutions • Fabric Collectives Acceleration (FCA) MPI/SHMEM collectives offload • GPUDirect for GPU accelerations • Congestion control and adaptive routing • Mellanox MPI optimizations • Optimize and accelerate the InfiniBand channel interface • Optimize resource management and resource utilization (HW, SW)
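As an illustration of how a "~1us ping pong" latency figure is typically measured, below is a minimal MPI ping-pong sketch in C, assuming a generic MPI implementation and exactly two ranks; the iteration count is illustrative and this is not a vendor benchmark.

```c
/* Minimal MPI ping-pong latency sketch. Run with exactly 2 ranks:
 *   mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    /* Each iteration is a round trip, so divide by 2*ITERS for one-way latency */
    if (rank == 0)
        printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}
```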

  20. Mellanox Advanced InfiniBand Solutions • Host/Fabric Software • Management: UFM, Mellanox-OS; integration with job schedulers; inbox drivers • Application Accelerations: collectives accelerations (FCA/CORE-Direct); GPU accelerations (GPUDirect); MPI/SHMEM; RDMA; Quality of Service • Networking Efficiency/Scalability: adaptive routing; congestion management; traffic-aware routing (TARA) • Server and Storage High-Speed Connectivity: CPU utilization; message rate; latency; bandwidth
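As a small illustration of the RDMA-capable connectivity layer this slide describes, the sketch below queries a port's state, LID, and active link width/speed through the standard libibverbs API. The choice of the first device and port 1, and the abbreviated error handling, are assumptions made for the example; link with -libverbs.

```c
/* Query the first RDMA device's port 1 attributes via libibverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_port_attr port;

    /* Port numbers are 1-based in the verbs API */
    if (ctx && ibv_query_port(ctx, 1, &port) == 0) {
        printf("device %s port 1: state=%d, LID=%u, active_width=%u, active_speed=%u\n",
               ibv_get_device_name(list[0]), (int)port.state,
               (unsigned)port.lid, (unsigned)port.active_width,
               (unsigned)port.active_speed);
    }

    if (ctx)
        ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```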

  21. Scalable Performance

  22. LLNL Hyperion Cluster • 1152 nodes, dedicated cluster for development testing • Open Environment • CPUs: mix of Intel 4-core Xeon L5420 and 4-core Xeon E5530 • Mellanox InfiniBand QDR switches and adapters

  23. Mellanox MPI Optimizations – MPI Natural Ring

  24. Mellanox MPI Optimization – MPI Random Ring

  25. Mellanox MPI Optimization – Highest Scalability at LLNL • Mellanox MPI optimizations enable linear strong scaling for LLNL applications • World-leading performance and scalability

  26. Leading End-to-End Connectivity Solution Provider for Servers and Storage Systems • Server/Compute connects over Virtual Protocol Interconnect (40G InfiniBand, 10/40GigE) to the Switch/Gateway, which connects over Virtual Protocol Interconnect (40G IB & FCoIB, 10/40GigE & FCoE, Fibre Channel) to Storage Front/Back-End • Industry’s only end-to-end InfiniBand and Ethernet portfolio: Host/Fabric Software, ICs, Adapter Cards, Switches/Gateways, Cables
