
Emulex I/O Solutions Storage Trends 2014






Presentation Transcript


  1. Emulex I/O Solutions: Storage Trends 2014

    Hilmar Beck, Senior Sales Engineer
  2. Next Generation Data Center Trends: IT Trends Driving New Networking Requirements
     - Enterprise virtualization: virtualization and consolidation growth continues unabated and drives I/O bandwidth
     - Cloud: cloud strategies and converged networks require technology and tools to measure quality of service and guarantee it
     - Network security: increasing predatory attacks on IT portals press the need for security solutions, and forensics requires capture solutions
     - Application delivery: application-specific performance demands are driving down latencies and driving movement to 10/40/100Gb converged fabrics
     - Flash storage: deployment of SSDs in storage arrays is pushing networking bandwidth and latency
  3. What is Gen 5 Fibre Channel?
     - Speed-based naming changed to generation-based naming
     - New advanced protocol services and features running at multiple speeds (16GFC, 8GFC, 4GFC), not just a bandwidth improvement
     - Maintains backward compatibility with previous FC generations
     - Generations: Gen 1 (1Gb FC), Gen 2 (2Gb FC), Gen 3 (4Gb FC), Gen 4 (8Gb FC), Gen 5 (16Gb), Gen 6 (32Gb)
  4. Emulex ExpressLane™: SSD Latency Challenge, QoS Solution
     - Challenge: flash storage shares the SAN with traditional rotating media, so mission-critical requests are stuck behind requests to slow storage; current queuing mechanisms are optimized for throughput, not latency
     - Solution: ExpressLane creates separate queues for low-latency storage, identified by LUN; individual queues are coalesced for latency, not overall bandwidth; queue associations are made from OneCommand Manager
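The queuing idea above can be sketched in a few lines. This is an illustrative toy model, not Emulex's implementation: I/Os to LUNs flagged as low-latency (flash) go into their own queue and are always dispatched first, so they are never stuck behind requests to rotating media.

```python
from collections import deque

class PrioritizedQueues:
    """Toy model of per-LUN priority queuing at an HBA (illustrative only)."""
    def __init__(self, low_latency_luns):
        self.low_latency_luns = set(low_latency_luns)
        self.fast = deque()  # I/Os bound for flash-backed LUNs
        self.slow = deque()  # I/Os bound for rotating-media LUNs

    def submit(self, lun, io):
        target = self.fast if lun in self.low_latency_luns else self.slow
        target.append((lun, io))

    def dispatch(self):
        # Drain the low-latency queue first; only then service slow I/Os.
        if self.fast:
            return self.fast.popleft()
        return self.slow.popleft() if self.slow else None

hba = PrioritizedQueues(low_latency_luns={7})
hba.submit(1, "read-disk-A")   # slow rotating media
hba.submit(7, "read-flash")    # mission-critical flash LUN
hba.submit(1, "read-disk-B")
order = [hba.dispatch() for _ in range(3)]
print(order)  # the flash I/O jumps ahead of the earlier disk I/Os
```

With a single shared queue the flash read would wait behind both disk reads; with separate queues its latency no longer depends on slow-media depth.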
  5. Emulex ExpressLane™: Congestion at the HBA (multiple VMs requesting I/O)
     - VM1 on Server 1 requests low-priority I/O from a disk array
     - VM2 requests high-priority I/O from a flash array
     - Congestion occurs at the HBA: VM2 traffic is stuck behind slower VM1 traffic
     - ExpressLane prioritizes VM2 traffic over VM1
     - Provides Quality of Service: I/Os have consistent latency performance
  6. Emulex CrossLink™: SSD Isolation Challenge, SSD Coordination Solution
     - Challenge: flash requires coordination between nodes; current solutions (TCP/IP and UDP) suffer from "roundabout" stack-hopping; latency and QoS issues over Ethernet hamper coordination with storage devices; trust issues arise (storage networks are deemed implicitly secure); separate Ethernet connectivity requires additional wiring and management
     - Solution: in-band FC messaging solves latency and "stack-hopping" for cache or device coordination; uses the standard, proven FC-CT protocol for FC and FCoE; simple interface (kernel or API)
  7. Emulex CrossLink™ Example: VM Migration Cache Prefill
     - 1. A VM migrates from Server A (source) to Server B (destination)
     - 2. Server A also sends its cache metadata via CrossLink
     - 3. Caching software in Server B processes the metadata to create a "to do" list of prefill operations
     - 4. The caching software issues standard read operations against the tiered array to load the cache
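The four steps above can be sketched as a short flow. The function names here are hypothetical, invented for illustration; they are not the CrossLink API. The key point the sketch shows: only metadata crosses the message channel, while the data itself is reloaded from the shared array.

```python
def send_cache_metadata(source_cache):
    # Step 2: the source sends only metadata (which blocks are hot),
    # not the cached data itself, over the in-band message channel.
    return sorted(source_cache)

def prefill(metadata, read_block):
    # Steps 3-4: the destination turns the metadata into a "to do" list
    # and warms its cache with standard read operations to the array.
    return {block: read_block(block) for block in metadata}

tiered_array = {10: b"A", 11: b"B", 12: b"C", 13: b"D"}
source_cache = {12: b"C", 11: b"B"}        # hot blocks cached on Server A
todo = send_cache_metadata(source_cache)   # metadata sent in-band
dest_cache = prefill(todo, tiered_array.get)
print(dest_cache)  # Server B's cache is warm before the VM arrives
```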
  8. Emulex & Brocade ClearLink (D_Port) Support
     - ClearLink is a rich SAN diagnostic mode in Brocade Gen 5 FC (16GFC) switches
     - ClearLink is now supported by all 16G host-based Emulex LightPulse Gen 5 FC HBAs (XE201-based)
     - Identifies and isolates marginal link-level failures and performance issues: SFP, port, and cable
     - Emulex #1 Gen 5 HBAs plus Brocade #1 Gen 5 switches, together providing superior SAN-wide diagnostics
  9. ClearLink D_Port Saves Time & Money Troubleshooting
     - A cable is faulty, but how do you find it? There can be hundreds of cables and SFPs in the SAN.
     - You could start replacing cables and SFPs one by one, by trial and error: wastes time
     - Or replace all cables and SFPs in the environment and try again: expensive, and wastes time
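A toy model makes the cost difference concrete (conceptual only, not how D_Port actually tests a link): an in-place diagnostic exercises each link where it sits and isolates the fault with zero part swaps, while swap-and-retry replaces parts until the fault happens to disappear.

```python
def diagnose_in_place(links, link_ok):
    # Diagnostic-style approach: test every link in place, report failures.
    return [link for link in links if not link_ok(link)]

def swap_until_fixed(links, faulty):
    # Trial and error: replace parts one by one, retesting after each swap.
    swaps = 0
    for link in links:
        swaps += 1               # replace this cable/SFP and retest the SAN
        if link in faulty:
            return swaps         # the fault was in the part just swapped
    return swaps

links = [f"cable-{i:02d}" for i in range(100)]
faulty = {"cable-72"}
found = diagnose_in_place(links, lambda l: l not in faulty)
swaps = swap_until_fixed(links, faulty)
print(found, swaps)  # the diagnostic isolates the fault; trial and error burns 73 swaps
```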
  10. Emulex Gen 5 Fibre Channel: Advanced Features for Flash/Cache and VMs
      - ExpressLane™ (priority queuing): Quality of Service (QoS) and performance to meet SLAs for application-sensitive data and flash/cache; alleviates congested networks; maximizes ROI on flash/cache systems
      - CrossLink™ (in-band message passing): significantly reduces latency and improves CPU utilization; simplifies management; increases reliability
      - ClearLink enablement (rich D_Port diagnostics): reduces downtime; saves time and money troubleshooting problems; industry-leading reliability
  11. Emulex Gen 5 Accelerates Application Performance vs. 8GFC
      - Database applications: 41% faster workload completion for SQL Server
      - Data warehousing workload: 33% more transactions per second for SQL Server (100-400 users)
      - Exchange workload: 3x IOPS vs. 8GFC
      - Virtualization/cloud: 75% better throughput on VMware ESXi
  12. Ethernet Connectivity: Software Defined Convergence
      - Discrete networking: 3x cost, 3x management, 3x cables, 3x switching
      - Converged networking: FCoE driving 10GbE; virtual networking led by blade servers, telcos, and web giants
      - Software defined convergence: RDMA over Converged Ethernet (RoCE); application acceleration; virtual I/O (OVN, SR-IOV); driving 40 and 100GbE for cloud, HPC, and big data; unique "switch agnostic" positioning
  13. Performance: What is New with the OCe14000?
      - Cloud, big data & SDN: secure multi-tenant clouds; SDN and workload optimization; hybrid cloud with NVGRE/VXLAN
      - VNeX virtualization: 70% faster hybrid cloud; 50% better CPU efficiency
      - Operational efficiency: lowest CPU utilization; save up to 50W per server; highest bandwidth and IOPS per watt
      - Web-scale performance: 4x small-packet performance; 50% better IOPS
  14. High-Performance Networking: Skyhawk RoCE
      - What is Remote Direct Memory Access (RDMA)? The ability to move data server-to-server directly between application memory, without any CPU involvement in the data path
      - What is RDMA over Converged Ethernet (RoCE)? A mechanism that provides this efficient data transfer with very low latency over Ethernet; essentially InfiniBand transport over existing Ethernet infrastructure (with PFC, etc.), whereas classic Ethernet is a best-effort protocol
      - Benefits of Skyhawk-R with RoCE: delivers low latency for performance-critical, transaction-intensive applications; better OPEX vs. an InfiniBand infrastructure, which requires a unique fabric that is difficult to deploy and manage
  15. Enterprise Cloud Needs RDMA
      - Use cases: VM migration, big data, file serving
      - Internal testing: Emulex OCe14000 using SMB 3.0 on Windows Server 2012 R2
  16. Importance of File Transfer Performance
      - RDMA delivers 77% faster transfers
      - Our pockets are generating big data: a growing amount of digital content lives on mobile devices
      - Internal testing: Emulex OCe14000 using OFED 3.5.2 (April 2-3, 2014, #2014IBUG)
  17. Move to Software-Defined Convergence
      - Multi-fabric block I/O (FC, FCoE & iSCSI): 6% CAGR through 2016*
      - Converged multi-fabric SAN/LAN (FCoE/iSCSI NIC): 25% 10/40G CAGR through 2016*
      - Software-defined convergence: VNeX, RoCE, NIC
      - Sources: *Crehan Research, Server-class Adapter and LOM Controller Long-range Forecast, July 2013; **Dell'Oro Group, Fibre Channel Adapter Vendor Report 2Q2013, Aug. 2013
  18. Emulex RoCE Offerings
      - XE100 series 10/40GbE network controller: FCoE, iSCSI, RoCE, NIC
      - OCe14101/2 10GbE SFP+ Ethernet adapters
      - OCe14401 40GbE QSFP+ Ethernet adapters
      - Virtual Network Acceleration (VNeX); enhanced multi-channel
      - XE100/OCe14000: 10G and 40G, PCIe 3.0 x8
  19. Where to Use RDMA
      - Initial applications: Windows Server 2012 SMB Direct (which implies SQL Server, Hyper-V VM migration, etc.); Linux NFS/RDMA
      - Microsoft will actively market Windows Server 2012 R2 and SMB Direct: better CPU efficiency, faster VM migrations, etc., convincing end users they need, or may want, the option
      - Others will be added in the future, based on OEM, partner, and end-user requests and an appropriate business case
  20. RoCE in Action
      - Without RoCE (standard 10GbE adapter): outgoing and incoming data pass from the user buffer down through the TCP/UDP and IP layers, adding latency at each hop
      - With RoCE (RDMA-enabled 10GbE adapter): the TCP/UDP and IP layers are bypassed, and data moves directly between the user buffer and the adapter
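The two data paths on this slide can be modeled conceptually (no real RDMA verbs here) by listing the host-stack layers each byte traverses:

```python
def layers_traversed(rdma_enabled):
    """Conceptual model of the slide's two data paths, not a real network stack."""
    path = ["application buffer"]
    if not rdma_enabled:
        # Standard adapter: data is handed down through the kernel stack.
        path += ["TCP/UDP", "IP"]
    # An RDMA-enabled adapter DMAs straight to/from application memory.
    path.append("adapter")
    return path

standard = layers_traversed(rdma_enabled=False)
roce = layers_traversed(rdma_enabled=True)
print(standard)  # ['application buffer', 'TCP/UDP', 'IP', 'adapter']
print(roce)      # ['application buffer', 'adapter']
```

Each skipped layer avoids a copy or protocol traversal on the host CPU, which is where RoCE's latency and CPU-efficiency gains come from.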
  21. Thank You!