
iWARP Ethernet: Leader in Next-Generation Ethernet

Learn about the industry-leading multiple-connection throughput and low latency of iWARP Ethernet. Review the proof points that show its performance advantage in multi-processor/multi-core systems, and see how iWARP achieves the same results as InfiniBand in production applications.


Presentation Transcript


  1. Leader in Next Generation Ethernet

  2. Outline • Where is iWARP Today? • Some Proof Points • Conclusion • Questions

  3. Latency in Multi-Processor/Multi-Core Systems

  4. Industry-Leading Multiple-Connection Throughput • The predominant RDMA command for Intel MPI, MVAPICH2, HP MPI, Scali MPI, Open MPI, MPICH2, and iSER
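
As a concrete illustration of what these MPI libraries exercise over the fabric, the listing below is a minimal sketch (not from the presentation) of a two-rank MPI ping-pong in C that measures small-message round-trip latency; any of the MPI libraries named above carries these messages over RDMA when run on an iWARP or InfiniBand interconnect. The message size and iteration count are arbitrary illustrative choices.

/* ping_pong.c -- build with mpicc, run with: mpirun -np 2 ./ping_pong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char buf[8] = {0};                 /* small message, so the test is latency-bound */

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes each message back */
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (MPI_Wtime() - start) * 1e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}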

  5. In Production Applications Now… 10Gb iWARP Ethernet achieves the same results as InfiniBand • Abaqus/Explicit: finite element analysis test suite (E1-E6) • [Chart: benchmarks E1-E6, InfiniBand vs. NetEffect 10Gb]

  6. In Production Applications Now… 10Gb iWARP Ethernet achieves the same results as InfiniBand • PAM-CRASH crash simulation: unmodified, automatically scaled across nodes, wall-clock times equal to or better than 20-Gbps InfiniBand • [Chart: DDR InfiniBand (20 Gbps) vs. NetEffect NE020 10GbE]

  7. In Production Applications Now… 10Gb iWARP Ethernet achieves the same results as InfiniBand • Fluent FL5Lx benchmarks: Computational Fluid Dynamics (CFD) • Fluent ‘rating’: the number of benchmarks that can be run (in sequence) in a 24-hour period; a higher rating means faster performance • [Chart: Fluent rating for FL5L1, FL5L2, and FL5L3]

  8. In Production Applications Now… Even 1Gb iWARP Ethernet achieves results on par with InfiniBand in some applications • Landmark Nexus reservoir simulation (dominated by latency and message rate) • [Chart: Landmark Nexus performance results, wall-clock time (h:mm:ss) vs. system configuration (processors-nodes, 2-2 through 32-8), comparing standard 1GbE at 8 and 16 cores, the NetEffect 1GbE adapter, 10Gb InfiniBand, and the NetEffect 10GbE adapter]

  9. Conclusion • We joined forces 2.5 years ago to make a single RDMA API work, interoperably, across multiple fabrics from multiple vendors • InfiniBand & Ethernet working together • Multiple IB vendors • Multiple iWARP vendors • It is now a reality!
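
The "single RDMA API" referred to above is the OpenFabrics verbs interface. As a minimal sketch (assuming a Linux host with libibverbs installed; compile with -libverbs), the listing below enumerates the local RDMA devices through that one API and reports whether each is InfiniBand or iWARP; every other verbs call is identical regardless of the fabric underneath.

/* list_rdma_devices.c -- enumerate RDMA devices via the OpenFabrics verbs API */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        /* The transport type distinguishes InfiniBand from iWARP; the
         * remaining verbs calls are the same for both fabrics. */
        const char *transport =
            devices[i]->transport_type == IBV_TRANSPORT_IWARP ? "iWARP" :
            devices[i]->transport_type == IBV_TRANSPORT_IB    ? "InfiniBand" :
                                                                 "unknown";

        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: transport=%s, ports=%d, max_qp=%d\n",
                   ibv_get_device_name(devices[i]),
                   transport, attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}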

  10. iWARP has moved “from benchmarks to applications”

  11. Questions

  12. Thank You! www.neteffect.com
