
Mellanox InfiniBand Interconnect: The Fabric of Choice for Clustering and Storage

September 2008

Gilad Shainer – Director of Technical Marketing



Company Overview

  • Silicon-based server and storage interconnect products

    • R&D, Operations in Israel; Business in California

    • Four generations of products since 1999

    • 250+ employees; worldwide sales & support

  • InfiniBand and Ethernet leadership

    • Foundation for the world’s most powerful computer

    • 3.7M 10/20/40Gb/s ports shipped as of June 2008

    • Proven execution, high-volume manufacturing & quality

  • Solid financial position

    • FY’07 revenue of $84.1M, 73% growth from FY’06

    • Record revenue in 2Q’08: $28.2M

    • 1H’08 revenue of $53.4M; 3Q’08 estimated at $28.5M-$29M

  • Tier-one, diversified customer base

    • Includes Cisco, Dawning, Dell, Fujitsu, Fujitsu-Siemens, HP, IBM, NEC, NetApp, QLogic, SGI, Sun, Supermicro, Voltaire

$106M raised in IPO, February 2007 (Ticker: MLNX)


InfiniBand End-to-End Products

  • High throughput - 40Gb/s
  • Low latency - 1us
  • Low CPU overhead
  • Kernel bypass
  • Remote DMA (RDMA) - see the sketch below
  • Reliability

[Diagram: end-to-end product portfolio - adapter ICs & cards, switch ICs, software, and cables, validated end to end; adapters in blade/rack servers and in storage connect through switches for maximum productivity]
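The kernel-bypass and RDMA bullets above are what set this product family apart from conventional NICs, so a small code illustration may help. The following is a minimal sketch, in C against the standard libibverbs user-space API, of posting a one-sided RDMA write. It assumes a protection domain pd, an already-connected reliable-connection queue pair qp, and a remote address/rkey pair exchanged out of band; these names are illustrative placeholders, not anything shown in the deck.

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Post a one-sided RDMA write of 'len' bytes from a local buffer to a
 * remote buffer, assuming 'qp' is an already-connected RC queue pair and
 * the peer's virtual address and rkey were exchanged out of band. */
static int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                           void *local_buf, size_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the HCA can DMA from it directly;
     * user-space memory registration is what enables kernel bypass. */
    struct ibv_mr *mr = ibv_reg_mr(pd, local_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: no remote CPU involvement */
    wr.send_flags          = IBV_SEND_SIGNALED;  /* generate a completion we can poll for */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* The work request goes straight to the adapter's send queue from user
     * space; no system call or kernel network stack sits on the data path. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

Completion of the write would then be observed by polling the associated completion queue with ibv_poll_cq. At no point does the data path enter the kernel, which is the basis of the low-latency and low-CPU-overhead claims above.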


Virtual Protocol Interconnect

[Diagram: applications (App1 ... AppX) sit on a consolidated application programming interface above the protocol, acceleration, and fabric layers listed below]

  • Protocols
    • Networking: TCP/IP/UDP, Sockets
    • Storage: NFS, CIFS, iSCSI, NFS-RDMA, SRP, iSER, Fibre Channel, Clustered
    • Clustering: MPI, DAPL, RDS, Sockets
    • Management: SNMP, SMI-S, OpenView, Tivoli, BMC, Computer Associates
  • Acceleration engines: Networking, Clustering, Storage, Virtualization, RDMA
  • Fabrics: 10/20/40Gb/s InfiniBand, 10GigE, Data Center Ethernet

Any Protocol over Any Convergence Fabric
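As a hedged illustration of the "any protocol over any convergence fabric" idea, the sketch below (mine, not from the deck) uses the same libibverbs API to enumerate adapters and report whether each port is running as InfiniBand or Ethernet. Note that the link_layer port attribute it relies on was added to libibverbs in releases newer than those current at the time of this presentation, so treat that as an assumption about a more recent software stack.

```c
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Enumerate verbs devices and print the link layer of each port.
 * With a VPI-capable adapter, the same verbs interface is used whether a
 * port is configured as InfiniBand or as Ethernet. */
int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs)
        return 1;

    for (int i = 0; i < num_devices; ++i) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (int port = 1; port <= dev_attr.phys_port_cnt; ++port) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) != 0)
                    continue;
                const char *ll =
                    (port_attr.link_layer == IBV_LINK_LAYER_ETHERNET)
                        ? "Ethernet" : "InfiniBand";
                printf("%s port %d: link layer %s\n",
                       ibv_get_device_name(devs[i]), port, ll);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```

The point of the example is that the application-facing interface is identical either way; the protocol layers listed above ride on whichever fabric each port is configured for.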



The Fastest InfiniBand Technology

  • InfiniBand 40Gb/s QDR in full production

    • Multiple sites are already utilizing InfiniBand QDR performance

  • ConnectX InfiniBand - 40Gb/s server and storage adapter

    • 1usec application latency, zero scalable latency impact

  • InfiniScale IV - 36-port InfiniBand 40Gb/s switch device

    • 3Tb/s switching capability in a single switch device



InfiniBand QDR Switches

  • 1RU 36-port QSFP, QDR switch

    • Up to 2.88Tb/s switching capacity

    • Powered connectors for active cables

    • Available now

  • 19U, 18-slot chassis, 324-port QDR switch

    • Up to 25.9Tb/s switching capacity

    • 18 QSFP ports per switch blade

    • Available: Q4 2009
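For reference, the quoted switching capacities are consistent with port count times the 40Gb/s line rate, counted in both directions (my arithmetic, not stated on the slides):

\[ 36 \times 40\,\mathrm{Gb/s} \times 2 = 2.88\,\mathrm{Tb/s} \qquad 324 \times 40\,\mathrm{Gb/s} \times 2 = 25.92\,\mathrm{Tb/s} \approx 25.9\,\mathrm{Tb/s} \]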



InfiniBand Technology Leadership

  • Industry Standard

    • Hardware, software, cabling, management

    • Designed for clustering and storage interconnect

  • Price and Performance

    • 40Gb/s node-to-node

    • 120Gb/s switch-to-switch

    • 1us application latency

    • Most aggressive roadmap in the industry

  • Reliable with congestion management

  • Efficient

    • RDMA and Transport Offload

    • Kernel bypass

    • CPU focuses on application processing

  • Scalable for Petascale computing & beyond

  • End-to-end quality of service

  • Virtualization acceleration

  • I/O consolidation, including storage

The InfiniBand Performance Gap is Increasing

[Chart: bandwidth roadmap comparing InfiniBand (20, 40, 60, and 80Gb/s 4X and 120 and 240Gb/s 12X data points) against Ethernet and Fibre Channel]

InfiniBand Delivers the Lowest Latency



InfiniBand 40Gb/s QDR Capabilities

  • Performance driven architecture

    • MPI latency of 1us, zero scalable latency impact

    • MPI bandwidth 6.5GB/s bi-dir, 3.25GB/s uni-dir

  • Enhanced communication

    • Adaptive/static routing, congestion control

  • Enhanced Scalability

    • Communication/Computation overlap (see the MPI sketch below)

    • Minimizing system noise effects (DOE-funded project)

[Chart: benchmark results for 8-core and 16-core configurations]
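To make the communication/computation overlap point concrete, here is a minimal sketch of the pattern in C with MPI: a non-blocking exchange is posted, independent work runs while the interconnect moves the data, and the ranks synchronize only when the results are needed. The message size and the do_local_work() routine are illustrative placeholders, not taken from the slides.

```c
#include <mpi.h>
#include <stdlib.h>

/* Placeholder for computation that does not depend on the incoming data. */
static void do_local_work(double *a, int n)
{
    for (int i = 0; i < n; ++i)
        a[i] = a[i] * 1.0001 + 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;               /* illustrative message size */
    double *sendbuf = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));
    double *work    = malloc(N * sizeof(double));
    for (int i = 0; i < N; ++i) { sendbuf[i] = rank; work[i] = i; }

    int peer = rank ^ 1;                 /* pair up neighbouring ranks */
    MPI_Request reqs[2];

    if (peer < size) {
        /* Start the exchange without blocking... */
        MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        /* ...overlap it with independent computation... */
        do_local_work(work, N);

        /* ...and only synchronize once the results are actually needed. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    } else {
        do_local_work(work, N);          /* odd rank count: no partner */
    }

    free(sendbuf); free(recvbuf); free(work);
    MPI_Finalize();
    return 0;
}
```

On an RDMA-capable fabric with transport offload, the MPI library can progress such a transfer largely in hardware, which is what lets the overlap translate into real scalability gains.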



HPC Advisory Council

  • Distinguished HPC alliance (OEMs, IHVs, ISVs, end-users)

  • Member activities

    • Qualify and optimize HPC solutions

    • Early access to new technology, mutual development of future solutions

    • Outreach

  • A community-effort support center for HPC end-users

    • End-User Cluster Center

    • End-user support center

  • For details – HPC@mellanox.com



Thank You
HPC@mellanox.com


