
HP Unified Cluster Portfolio with Cisco InfiniBand

R. Kent Koeninger, HPCD Product and Technology Marketing, May 2007. HP Unified Cluster Portfolio with Cisco InfiniBand. HP delivers complete HPC solutions: innovation based on standards, the broadest choice of customer-focused HPC solutions, and affordable, accessible supercomputing performance.





Presentation Transcript


  1. R. Kent Koeninger, HPCD Product and Technology Marketing, May 2007. HP Unified Cluster Portfolio with Cisco InfiniBand

  2. HP Delivers Complete HPC Solutions
  • Innovation based on standards
  • Broadest choice of customer-focused HPC solutions
  • Affordable, accessible supercomputing performance

  3. HPC Cluster Interconnect Requirements
  • Make it go fast
  • Highly reliable, but usually not HA
  • Low price: spend the budget on compute services
  • High scalability and throughput: connect many servers in clusters, farms, and grids
  • Support efficient distributed-parallel execution: high bandwidth, low latency, high message rate, low overhead
  • High-speed parallel-scalable filesystem connectivity
  • Compatible with existing applications: portability lowers TCO
  • Easy to deploy, use, and upgrade
  • Best initial and long-term TCO

  4. HP-MPI: ISV Preferred for Performance and Transport & Object Compatibility
  • Object compatibility across MPI versions: HP-MPI V2.1 and later (MPI-1, MPI-2) is object compatible with MPICH V1.2.5 and later (MPI-1)
  • Object compatibility across transports (e.g. Gb Ethernet)
  • Multiple OS and cluster compatibility: Linux x86, Linux Itanium, XC V2.0 / XC clusters
  [Diagram: an MPI-1 application, built shared, running unchanged across these MPI versions, transports, and platforms; a minimal source sketch follows below]
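As context for the compatibility claim above, here is a minimal MPI-1 program of the kind it covers. It is a sketch, not HP sample code: it uses only standard MPI-1 calls, so the same source (and, per the slide, the same objects) should work with HP-MPI or MPICH; the mpicc/mpirun wrapper names are assumptions about the local toolchain.

```c
/* Minimal MPI-1 sketch: each rank reports in; uses only standard
 * MPI-1 calls, so it builds unchanged against HP-MPI or MPICH.
 * Build/run (assumed wrappers): mpicc hello.c -o hello && mpirun -np 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks in the job */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```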

  5. HP Unified Cluster Portfolio Interconnects: Fast communication for fast computation
  • UCP integrated portfolio of cluster interconnects: high performance and high scalability, low price, industry leading and industry standard
  • 1 Gbps Ethernet (GbE and dual GbE): inexpensive and sufficient for many HPC computational clusters
  • InfiniBand (10 Gbps and 20 Gbps): enables high scalability for distributed-parallel applications; delivers lower latency and higher packet rates for MPI, sockets, ...
  • Other interconnect solutions available by request
  • New: UCP with Cisco InfiniBand

  6. Faster Interconnects, Faster Clusters
  • High performance: bandwidth, latency & packet rate
  • Maximum parallel application performance and scalability
  • Best price, best performance, fastest time to market
  • Portfolio of industry-standard interconnects
  • Open, industry-standard software interfaces
  • Application compatibility and portability
  • Highest performance and highest parallel-programming scalability
  • HP-MPI and sockets on faster, lower-level software interfaces

  7. HP Unified Cluster Portfolio High-Performance Interconnects
  • Low-cost Ethernet interconnects: sufficient for the majority of HPC clusters
  • Higher-performance InfiniBand interconnects: for demanding distributed-parallel message passing (MPI)
  • GigE: 60-80 MB/s, >40 μsec MPI latency
  • InfiniBand 4X DDR: 2.4-2.6 GB/s, 3-4 μsec MPI latency (a measurement sketch follows below)
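Figures like the latency and bandwidth numbers above are normally obtained with an MPI ping-pong between two ranks: half the round-trip time of a small message approximates latency, and large messages approximate streaming bandwidth. Below is a minimal, illustrative version of such a test, not HP's or a standard benchmark; the mpirun launcher name is an assumption about the local environment.

```c
/* Minimal MPI ping-pong sketch (illustrative, not a tuned benchmark).
 * Run with exactly two ranks, e.g. mpirun -np 2 ./pingpong (assumed launcher).
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    int rank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Sweep a few message sizes: 1 B (latency) up to 1 MB (bandwidth). */
    for (int bytes = 1; bytes <= (1 << 20); bytes *= 32) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double one_way = (MPI_Wtime() - t0) / (2.0 * iters);  /* half the round trip */
        if (rank == 0)
            printf("%8d bytes  %8.2f usec  %8.2f MB/s\n",
                   bytes, one_way * 1e6, bytes / one_way / 1e6);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}
```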

  8. Gigabit Ethernet
  • Low price; sufficient for a majority of HPC clusters
  • Good for many independent processes; also OK for low-scale message-passing parallel-distributed codes
  • Good access to storage
  • 100 MB/s per server: sufficient for low core counts (too slow for 16-core servers? see the arithmetic below)
  • Dual and quad GbE for enterprise uses; InfiniBand can often underprice and outperform bonded GbE
  • 10GbE for backbone (switch-to-switch) use
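A quick way to read the 100 MB/s figure is as a per-core share as core counts rise, which is the arithmetic behind the 16-core question above. The snippet below is just that back-of-the-envelope division; the core counts are illustrative.

```c
/* Per-core share of one GigE link (about 100 MB/s of payload), as a
 * back-of-the-envelope check on how far a single link stretches.
 */
#include <stdio.h>

int main(void)
{
    const double gige_mb_per_s = 100.0;          /* figure quoted on the slide */
    const int cores[] = { 2, 4, 8, 16 };         /* illustrative core counts */

    for (int i = 0; i < 4; i++)
        printf("%2d cores: %6.2f MB/s per core\n",
               cores[i], gige_mb_per_s / cores[i]);
    return 0;
}
```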

  9. Why Interconnects Matter: Fluent Performance Study, IB versus GigE
  • 3.6M cell model on 1 to 16 cores (2, 4, and 8 nodes)
  • Almost linear speedup with IB
  • GigE does not scale beyond small clusters

  10. What is InfiniBand?
  • Industry-standard switched fabric
  • High bandwidth: 20 Gb/s each direction with a 4X DDR (double data rate) link; 40 Gb/s QDR expected in the 2008 timeframe (link-rate arithmetic below)
  • Very low latency: 3-4 μsec MPI ping-pong with Mellanox technology
  • Very low CPU usage during message passing: enables computation and message-passing overlap
  • Scalability: thousands of nodes
  • Ease of clustering: self-discovery of nodes, plug and play
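The quoted link rates follow from the link width and per-lane signaling rate, minus the 8b/10b line coding that SDR/DDR/QDR InfiniBand links use. The sketch below works through that arithmetic, which is why a 20 Gb/s 4X DDR link carries about 2 GB/s of payload per direction.

```c
/* InfiniBand 4X link-rate arithmetic: 4 lanes times the per-lane
 * signaling rate, then 8b/10b line coding (80% efficient) to get
 * the usable data rate per direction.
 */
#include <stdio.h>

int main(void)
{
    const char  *gen[]       = { "SDR", "DDR", "QDR" };
    const double lane_gbps[] = { 2.5, 5.0, 10.0 };    /* signaling per lane */
    const int    lanes = 4;                           /* "4X" link width */

    for (int i = 0; i < 3; i++) {
        double signal = lanes * lane_gbps[i];         /* rate quoted on the slides */
        double data   = signal * 8.0 / 10.0;          /* after 8b/10b coding */
        printf("4X %s: %4.0f Gb/s signaling, %4.0f Gb/s data (%.1f GB/s) per direction\n",
               gen[i], signal, data, data / 8.0);
    }
    return 0;
}
```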

  11. InfiniBand Transport Offload
  • Eliminates system bottlenecks with RDMA (a verbs sketch follows below)
  • Kernel bypass
  • Transport protocol handled in the adapter
  • Zero-copy operations
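To make the kernel-bypass and zero-copy points concrete, here is a small sketch against the verbs API (libibverbs, as shipped with OFED). It opens the first HCA from user space and registers a buffer; registration pins the memory and hands the adapter a DMA mapping plus local/remote keys, which is what later lets sends and RDMA reads/writes move data with no kernel involvement or intermediate copies. Queue-pair setup and the out-of-band exchange of keys are omitted, so treat this as an illustrative fragment rather than a complete RDMA program.

```c
/* Verbs sketch: open an HCA and register memory for RDMA (illustrative;
 * a complete transfer would also create queue pairs and exchange keys).
 * Build assumption: gcc reg.c -libverbs, with OFED installed.
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) { fprintf(stderr, "no IB devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]); /* user-space handle: kernel bypass */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */

    size_t len = 1 << 20;
    void  *buf = calloc(1, len);

    /* Registration pins the pages and maps them for the HCA, so the
     * adapter can DMA directly to/from application memory (zero copy). */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes on %s: lkey=0x%x rkey=0x%x\n",
           len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```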

  12. 10 Gb Ethernet with RDMA (RNICs)
  • 10GigE RDMA RNICs shipping mid-2007: near 1 GB/s with under 10 μsec MPI latency
  • High-bandwidth, low-latency Ethernet: Ethernet compatibility with HPC performance
  • 10GbE expected to sell well in enterprise markets: available 2007, ramping 2008, hitting stride 2009
  • HP BladeSystem c-Class 10GbE switching
  • 40 Gbps per switch module: 4 external 10GbE ports and 16 internal 1GbE ports
  • 100 Gbps per switch module: 10 external and 10 internal 10GbE ports; two modules for 10GbE per blade across 16 blades
  • 10GbE not expected to ramp in HPC markets: IB has lower price, higher bandwidth, better latency, faster message rates, and higher cluster scalability
  • Growing demand for IB to 10GbE gateways (6 to 9 months away)

  13. Cisco InfiniBand in HP Cluster Platform and BladeSystem c-Class Clusters
  • Cisco SFS 7000D switch series: 24-, 144- & 288-port DDR (SFS7000D, SFS7012D, SFS7024D) with Cisco Fabric Manager
  • HP-branded Mellanox 4X DDR PCIe HCAs (DL and BL) with Cisco drivers
  • BladeSystem c-Class: HP DDR 24-port c-Class switch module, HP 4X DDR single-port c-Class mezzanine card, Cisco IB drivers & Fabric Manager for c-Class

  14. Cisco InfiniBand Software Stacks for BladeSystem c-Class Clusters (Cisco drivers not sold by HP on rack-mount servers)
  • OFED (OpenFabrics Enterprise Distribution): compatible with XC 3.2 and HP SFS V2.2-1
  • Cisco InfiniBand drivers for BladeSystem c-Class clusters: 3 versions (Commercial Linux, OFED Linux & Windows); Windows not yet available
  • Recommended for c-Class customers who prefer Cisco's networking ecosystem

  15. UCP Linux Cisco-InfiniBand, CY 2Q2007 (May): options to sell and ship with CP support (* in CP but not supported by XC or HP SFS)

  16. Cisco HPC InfiniBand Solution Building Blocks
  • Cisco InfiniBand DDR server fabric switches
  • Linux drivers for the c-Class host channel adapter: MPI, IPoIB, SDP, ... (Cisco Commercial and OFED; Windows TBD); a socket sketch follows below
  • Embedded and hosted subnet manager; embedded system and fabric management
  • CiscoWorks & virtualization software packages
  • Gateway modules: IB to Ethernet and IB to Fibre Channel (Cisco reference parts, not parts from HP)
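IPoIB and SDP exist so that unmodified socket code can use the IB fabric. The sketch below is an ordinary TCP client: run over IPoIB it needs no changes beyond connecting to an address assigned to the IB interface (the address and port here are placeholders), and OFED's SDP support can typically redirect such sockets onto SDP via a preload library, again without source changes.

```c
/* Ordinary TCP client.  Over IPoIB it runs unchanged when the peer
 * address lives on the IB interface (e.g. ib0); OFED's SDP support can
 * typically redirect such sockets via a preload library.  The address
 * and port below are placeholders, not values from the slides.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { 0 };

    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                         /* placeholder port */
    inet_pton(AF_INET, "192.168.100.2", &peer.sin_addr);   /* placeholder IPoIB address */

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
        const char msg[] = "hello over the IB fabric\n";
        write(fd, msg, sizeof(msg) - 1);
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}
```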

  17. Cisco SFS 7000D
  • 24 dual-speed InfiniBand 4X ports: 20-Gbps double data rate (DDR) or 10-Gbps single data rate (SDR)
  • Non-blocking 480-Gbps cross-sectional bandwidth
  • Port-to-port latency under 200 nanoseconds
  • Embedded subnet manager
  • Standard Cisco CLI, web, and Java-based systems-management options
  • Powered ports for flexible copper and optical interfaces
  • Cisco-specified, exclusive belly-to-belly IB connectors; DDR signals reach user ports without traversing a mezzanine connector

  18. Cisco Large Switch DDR Family
  • SFS7012D: 7U modular chassis; up to 144 4X DDR ports; 12 side-by-side slots for 12-port 4X LIMs with powered interfaces; redundant power/cooling, redundant management, hot-swappable FRUs; external subnet manager. Best use: 97-144 node clusters
  • SFS7024D: 14U modular chassis; up to 288 4X DDR ports; 24 side-by-side slots for 12-port 4X LIMs with powered interfaces; redundant power/cooling, redundant management, hot-swappable FRUs; external subnet manager. Best use: 145-288 node clusters, or as the core switch for 1,536+ node clusters

  19. Cisco InfiniBand in c-Class BladeSystem Clusters

  20. BladeSystem - Cisco InfiniBand Solution
  • Cisco SFS 7000D InfiniBand switches & fabric managers
  • Cisco InfiniBand host-based software drivers
  • InfiniBand: Mellanox HCA and switch module
  • HP c-Class BladeSystem

  21. Small configuration example with c-Class
  • Two c7000 enclosures, each with 16 BL460c blades (one HCA each) and a DDR IB switch module (16 internal ports, 8 external uplinks)
  • Up to 32-node cluster configuration (2 switch hops)
  • With OFED drivers: requires at least one SFS7000D (24-port) switch to run the Cisco Fabric Manager
  • Note: other Ethernet networks are not drawn in this diagram

  22. Single rack example with c-Class
  • Three c7000 enclosures (16 blades each); each leaf-level DDR IB switch module uplinks to two spine-level SFS7000D 24-port DDR IB switches (4 uplinks to each)
  • Up to 48-node cluster configuration
  • Subnet manager runs on a 24-port switch
  • Fabric redundancy; max switch hops: 3
  • Note: other Ethernet networks are not drawn in this diagram

  23. Multi-rack configuration example with c-Class
  • Use 24-port SFS7000D switches for up to 384 IB-port configurations; use the larger SFS7012D and SFS7024D switches for larger c-Class clusters
  • 16 c7000 enclosures (16 blades each): 256-node cluster configuration with eight 24-port DDR IB spine switches
  • Subnet manager runs on a switch
  • Fabric redundancy; max switch hops: 3
  • Note: other Ethernet networks are not drawn in this diagram

  24. Scaling clusters with a larger switch
  • 32 c7000 enclosures (16 blades each), each enclosure switch module bringing 8 uplinks into an SFS7024D 288-port switch (8 x 36 = 288 ports)
  • 512-node cluster configuration with a single 288-port switch (up to 5 switch hops); port arithmetic sketched below
  • Requires at least one SFS7000D (24-port) switch to run the Cisco Fabric Manager
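The sizing in these configuration examples follows from two numbers per c7000: 16 blades and 8 external DDR uplinks from the enclosure switch module. The sketch below runs that arithmetic against a single 288-port SFS7024D core, which is why 32 enclosures give 512 nodes on 256 core ports, with room for up to 36 enclosures (8 x 36 = 288); the list of enclosure counts is illustrative.

```c
/* Sizing arithmetic for c-Class IB clusters around one SFS7024D core:
 * each c7000 contributes 16 blades and 8 external DDR uplinks.
 */
#include <stdio.h>

int main(void)
{
    const int blades_per_enclosure  = 16;
    const int uplinks_per_enclosure = 8;
    const int core_ports            = 288;   /* SFS7024D max 4X DDR ports */

    const int enclosures[] = { 2, 3, 16, 32, 36 };   /* illustrative counts */
    for (int i = 0; i < 5; i++) {
        int n       = enclosures[i];
        int nodes   = n * blades_per_enclosure;
        int uplinks = n * uplinks_per_enclosure;
        printf("%2d enclosures: %3d nodes, %3d of %d core ports used%s\n",
               n, nodes, uplinks, core_ports,
               uplinks > core_ports ? " (needs more than one core switch)" : "");
    }
    return 0;
}
```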

  25. Expected HP Unified Cluster Portfolio Interconnect Improvements (not necessarily yet plan of record)
  [Diagram labels: top-level switches (288 ports); node-level switches (24 ports), each connecting to 12 nodes]
  • Higher-bandwidth, lower-latency Ethernet: Ethernet compatibility with HPC performance for MPI
  • InfiniBand performance improvements: continuing the lead in performance, 20 Gbps to 40 Gbps
  • 10GigE RDMA in mid-2007: near 1 GB/s with under 10 μsec MPI latency
  • InfiniBand: lower-latency HCAs in 2007 (under 2 μsec), PCI Express 2.0 (DDR) in 2008, quad data rate (QDR) in 2008/2009 (over 3 GB/s)

  26. Cisco IB Value

  27. Cisco HPC Market Leadership: Top 500 List (Nov 2006)
  • 7 of the top 10 systems use Cisco networking
  • The 3 largest InfiniBand clusters run Cisco InfiniBand**
  • InfiniBand is growing, with 22% attachment; InfiniBand is based on open standards, with Cisco as an active contributor
  • Ethernet is the predominant interconnect, with 60% attachment
  • Myrinet and Quadrics, based on proprietary technologies, are declining
  • 359 clusters in the Top 500: 211 Gigabit Ethernet, 79 InfiniBand, 51 Myrinet, 13 Quadrics
  • * Growth numbers shown here reflect changes in Top 500 systems; they don't necessarily reflect new purchases
  • ** Cluster size is measured by number of nodes, not teraflops: Sandia National Labs #6, Maui HPCC #11 & TACC #12

  28. Cisco InfiniBand Differentiators
  • Most complete DDR switching line: Cisco is shipping fixed and modular DDR switches with best-in-class bit error rates (10^-15)
  • Scalable InfiniBand subnet manager: proven scalability of 4,600 nodes at Sandia Labs; high availability across multiple SMs with full database synchronization; rapid fabric discovery and fabric bring-up (less than 60 seconds for 4,000 nodes); optimized routing for SDR/DDR mixed fabrics
  • InfiniBand host drivers
  • Enterprise-class management and security: SNMP v3 for capturing chassis failures, performance counters, fabric topology, and logging over SNMP; SNMP integration with CiscoWorks and HP OpenView; image upgrades over FTP, TFTP, SCP; RADIUS and TACACS+, integrated with Cisco ACS
  • High-performance I/O: high-performance, highly available Ethernet gateway for client, NAS & parallel file system access; SRP-to-Fibre Channel gateway for SAN access; remote server boot for virtualization

  29. Cisco SFS InfiniBand Switch Differentiators

  30. Linux Commercial Stack - Differentiators

  31. Cisco Components of OFED 1.1: Cisco is the primary SQA tester of OFED

  32. Thank You!
