
Redefining Storage Economics



Presentation Transcript


  1. Redefining Storage Economics

  2. San Diego VMUG July 25th, 2011 • Storage for Performance • Coraid Introduction • Customer Case Study • Performance Overview • Throughput • IOPS • RAID Math • Disk bound/controller bound • Testing Tools & Tips • Product Demo • Q&A

  3. Technology Overview

  4. Disrupting the SAN Industry • Ethernet SAN technology: EtherDrive® • 5-8x price/performance advantage vs. legacy SANs • Radically simplified SAN topology • In production at more than 1,400 companies and federal agencies • Optimized for VMware virtualization • Works with existing infrastructure

  5. Evolution of SAN

  6. The Future of Storage… Evolving to Cloud • The consumer is asking for: • Transparent scalability • Industry-standard hardware, no “controller headaches” • High performance: 1GbE / 10GbE, 530-1,800 MB/sec • Low budget impact: $600 - $1,250 / TB • (Raise / Eliminate / Reduce / Create value diagram) • “Ethernet SANs are less complex, perform with better economics” – Enterprise Strategy Group, Jan 2011

  7. Storage Challenges • A: Legacy SAN topology – rigid, expensive • Static connections • Expensive HBAs, static workloads • Complex multipathing • Controlled data layout on drives • Extremely complex SAN management • Scale-up compute tied to scale-up storage • Predictable application access profiles • B: Bottleneck – dynamic virtual workloads • Server cluster with VMotion • Chaotic data layout on drives: head contention • Dynamic application-to-storage relationships • Scale-out compute infrastructure • Unpredictable, variable application access profiles

  8. ESG Research: The Evolution of Server Virtualization • Enterprise Strategy Group, Bowker / Oltsik – November 2010

  9. ESG Research: The Evolution of Server Virtualization • Impact on Customers: • “We have increased our use of SAN-based storage…” • “It has caused us to purchase from new storage vendors…”

  10. ESG Research: The Evolution of Server Virtualization

  11. Coraid EtherDrive: Scale-out Ethernet SAN for Dynamic Virtual Workloads • EtherDrive benefits: • 5-8x price/performance advantage • “Bare metal performance” on off-the-shelf hardware • Operational simplicity: eliminates complex topologies and multipathing • Simple recovery – Zero Hour Support • Scale-out: no controller bottleneck, grow in line with business demand • Topology: server cluster with vMotion™ connected over Ethernet (1Gb / 10Gb) to massively parallel AoE EtherDrive storage arrays

  12. Coraid = Ethernet-SAN

  13. Background: Development Timeline

  14. QoS with Coraid: Modular, Flexible SLAs • Multi-tiered storage in a single shelf • Test small, grow in stages while preserving a low $/desktop

  15. EtherDrive™ Deployment Methods • Ethernet cables can be connected directly to a host server, or the connections can be made through a standard Ethernet switch • A high-availability SAN fabric should be created where possible by leveraging dual switches • Direct attached – note: each SR/SRX port can be connected directly to a separate host • Networked – high-availability fabric

  16. Coraid – Filesystem Agnostic Blocks • Coraid works well with: • Virtualization • File sharing • Clustering • Single-namespace • OS operations • High performance computing • Etc. • Example filesystems on Coraid EtherDrive: VMFS, ZFS, NTFS, GPFS, NFS

  17. ATA over Ethernet - AoE Protocol • Layer 2 Protocol • Non-routable • SAN Fabric can span multiple Layer 2 Switches • Requires no IP Addressing Scheme • Not Connection-Based • Physical Connection to SAN Fabric is only requirement • Simplified Storage Configuration • Transmission Controls Built-In • iSCSI utilizes Layer 4 TCP for Transmission Control • Fibre Channel uses proprietary $$ hardware • Simple and Efficient • 12 Page Specification • http://support.coraid.com/documents/AoEr11.txt
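As a rough illustration of how small the protocol is, here is a minimal sketch of an AoE frame built field by field. The header layout (version/flags, error, major/minor address, command, tag) follows the AoEr11 specification linked above; the helper function and the example MAC addresses are illustrative, not from the slides.

```python
import struct

AOE_ETHERTYPE = 0x88A2       # registered EtherType for ATA over Ethernet
AOE_VERSION = 1
CMD_QUERY_CONFIG = 1         # "Query Config Information" command per AoEr11

def aoe_query_config_frame(dst_mac: bytes, src_mac: bytes,
                           shelf: int = 0xFFFF, slot: int = 0xFF,
                           tag: int = 0) -> bytes:
    """Build a broadcast AoE query-config frame (hypothetical helper).

    shelf=0xFFFF / slot=0xFF address all shelves and slots, which is how an
    initiator discovers EtherDrive targets on the local Layer 2 segment.
    """
    eth_hdr = dst_mac + src_mac + struct.pack("!H", AOE_ETHERTYPE)
    aoe_hdr = struct.pack(
        "!BBHBBI",
        AOE_VERSION << 4,    # version in the high nibble, flags in the low nibble
        0,                   # error field
        shelf,               # major address (shelf)
        slot,                # minor address (slot)
        CMD_QUERY_CONFIG,    # command
        tag,                 # tag, echoed back by the target
    )
    return eth_hdr + aoe_hdr

# Example: broadcast discovery frame from an illustrative source MAC
frame = aoe_query_config_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01")
print(len(frame), frame.hex())
```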

  18. Coraid EtherDrive SRX-Series EtherDrive SRX Overview: • High-density Ethernet SAN array, targeted at cloud and large enterprise environments • SRX-Series supports up to: • 36 Drives (24/12 in 4U) • 108 TB with 3 TB drives • SATA/SAS/SSD - mix in same shelf • 10 Gb Ethernet • 1,800+ MB/sec throughput • Starting at under $600/TB Flexible, High Performance, Scale Out Storage • EtherDrive SRX Options: • SRX2800 – 2U, 16x 3.5” disks • SRX3200 – 4U, 24x 3.5” disks • SRX3500 – 2U, 24x 2.5” disks • SRX4200 – 4U, 36x 3.5” disks

  19. Customer Case Study

  20. Customer Case Study • Cloud Application Provider • 21 ESXi 4.1 Servers • 8,000 IOPS • 30 TB • Requirements: • Cost sensitive • Cannot affect uptime • Flexible platform • GigE Connectivity

  21. Customer Case Study • 3 SRX 3200 Chassis • 48 7.2k 2TB SATA disks • 32 15k 300GB SAS disks • Able to leverage additional capacity for resting snap deltas • Non-disruptive migration to 10gigE SAN • Hot spare chassis • StressLinux/VI Client/SAN combo performance monitoring

  22. Coraid Architecture • Star topology simplicity • Scale-out architecture allows the SAN to grow linearly in both performance and cost • Additional shelves can be plugged directly into the same Layer 2 network segment to expand storage easily • (Diagram: LAN and SAN segments)

  23. Adding Compute Resources • Expand compute as easily as storage • No path management required • Connecting Coraid HBAs in standby servers immediately attaches all shared storage, making the node instantly ready for the cluster • Add compute and memory resources to the cluster on demand

  24. Adding Storage Resources • No single point of failure scale out • Start with single chassis • Mirror critical volumes between chassis • Critical Datastores • SSD Replica Volumes • High performance pools

  25. Performance Overview

  26. Throughput vs IOPS • It’s not fast enough! • What defines performance – is it MB/s or IOPS? • Determine whether the workload is random or sequential • Seemingly sequential workloads can become effectively random as access volume grows or many VMs share the same storage • Random environments require attention to the IOPS of the storage system • Throughput-intensive environments focus mostly on connectivity type – GigE, 10GbE, multipathing

  27. Throughput Options • Marketing vs. real numbers: 3 Gb/s, 6 Gb/s interface ratings, etc. • The drives themselves sustain only around 100 MB/s max • Add more spindles • Add more backend connections • Add more host-based connections • Use testing to determine where the bottlenecks are • Choose a lightweight transport mechanism • Coraid delivers 1,200 MB/s – line rate over 10GbE
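A quick back-of-the-envelope sketch of the disk-bound vs. link-bound question, using the rule-of-thumb figures from this slide (~100 MB/s per spindle, ~1,250 MB/s line rate for one 10GbE port). The spindle and port counts are made-up example inputs, not a sizing recommendation.

```python
# Rough check: is the configuration disk-bound or link-bound?
spindles = 16
per_drive_mb_s = 100        # ~100 MB/s sustained per spinning drive (rule of thumb)
links = 1
link_mb_s = 1250            # roughly line rate for a single 10GbE port

disk_limit = spindles * per_drive_mb_s   # what the spindles can push in aggregate
link_limit = links * link_mb_s           # what the wire can carry

bottleneck = "link" if link_limit < disk_limit else "disk"
print(f"disk limit {disk_limit} MB/s, link limit {link_limit} MB/s "
      f"-> expect ~{min(disk_limit, link_limit)} MB/s ({bottleneck} bound)")
```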

  28. IOPS Options • IOPS measure the number of I/O operations completed per second • For random workloads the limit is mechanical: how quickly the disk head can seek back and forth across the platter within a second • Scaling IOPS is largely about adding spindles • Use the right tools to determine the IOPS requirement • Determine how the RAID type will affect the available IOPS
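The mechanical limit mentioned above can be turned into a rough per-spindle estimate: random IOPS are bounded by average seek time plus average rotational latency. The seek times below are typical vendor figures assumed for illustration, not numbers from the presentation.

```python
# Why spindle count matters: one rotating disk's random IOPS ceiling.
def max_random_iops(rpm: int, avg_seek_ms: float) -> float:
    # On average the head waits half a revolution for the target sector.
    rotational_latency_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Assumed typical average seek times per drive class (illustrative only).
for rpm, seek_ms in [(7200, 8.5), (10_000, 4.5), (15_000, 3.5)]:
    print(f"{rpm:>6} rpm: ~{max_random_iops(rpm, seek_ms):.0f} IOPS per spindle")
```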

  29. IOPS Options • Disk type dictates per-spindle IOPS • More spindles and heads moving in parallel means more available IOPS

  30. RAID Math

  31. RAID Math • Each RAID configuration imposes a different IOPS penalty on disk activity • Penalties apply to write activity, not read activity

  32. RAID Math • Front-end IOPS: what the host actually sees • Back-end IOPS: the total IOPS available from the SAN’s disks • Use the following formula to calculate the needed back-end IOPS: (Total IOPS × %Read) + ((Total IOPS × %Write) × RAID penalty) = needed back-end IOPS
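A small sketch of this formula in code, using commonly quoted write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6; these are assumed rule-of-thumb values, not stated on the slide). The 8,000-IOPS example mirrors the customer case study earlier in the deck; the 70/30 read/write split is illustrative.

```python
# Needed back-end IOPS = front-end reads + (front-end writes * RAID write penalty)
RAID_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}   # assumed typical penalties

def backend_iops(front_end_iops: int, read_pct: float, raid: str) -> float:
    write_pct = 1.0 - read_pct
    return (front_end_iops * read_pct
            + front_end_iops * write_pct * RAID_PENALTY[raid])

# Example: 8,000 front-end IOPS at an assumed 70% read / 30% write mix
for raid in RAID_PENALTY:
    print(raid, round(backend_iops(8000, 0.70, raid)))
# raid10 -> 10400, raid5 -> 15200, raid6 -> 20000
```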

  33. Monitoring Tools

  34. Testing Tools • StressLinux (www.stresslinux.org) • ESXTop • ESXPlot • VI Monitoring • SAN Based

  35. SAN Based

Before Transfer:
Port0: Total Packets Received: 22603770  Total Packets Transmitted: 21702657
Port1: Total Packets Received: 22603783  Total Packets Transmitted: 21702648

After Transfer:
Port0: Total Packets Received: 22615833 (+12063)  Total Packets Transmitted: 21713926 (+11269)
Port1: Total Packets Received: 22615843 (+12060)  Total Packets Transmitted: 21713915 (+11267)

~ # grep . /proc/ethdrv/ifstats
*** ctlrindx=2 ***
*** EHBA-2-E-RJ45 00004100a6e40000 ***
seen=00000081 Ims=000000dd Icr=00000000 c->im=000000dd
Rdbal=43566400 Rdbah=00000000 Tdbal=43568400 Tdbah=00000000 Rxdctl=02010000
Packets Received (64 Bytes): 15734677 2137009
Packets Received (512-1023 Bytes): 481932 89656
Packets Received (1024-mtu Bytes): 6202225 649655
Good Packets Received: 22418834 2876320
Broadcast Packets Received: 388305 73600
Good Packets Transmitted: 21702657 2737360
Good Octets Received: 47876681928 5509227416
Good Octets Transmitted: 84974576160 6481749920
Total Octets Received: 47890007059 5511657455
Total Octets Transmitted: 84974576160 6481749920
Total Packets Received: 22603770 2911266
Total Packets Transmitted: 21702657 2737360
Packets Transmitted (64 Bytes): 6724309 745100
Packets Transmitted (512-1023 Bytes): 144998 18260
Packets Transmitted (1024-mtu Bytes): 14833350 1974000
Broadcast Packets Transmitted: 21566 4085
Interrupt Assertion: 29782386 4010570
Interrupt RxPktTimer: 22418834 2876320
Interrupt RxAbsTimer: 22135042 2871060
Interrupt TxPktTimer: 21702657 2737360
Interrupt Tx Desc Low: 21702657 2737360
nrd=256 rdfree=249 rxerr=0 nobufs=0 rdh=146 rdt=139 drdh=146 drdt=139
ntd=256 txavail=252 dropped=13 tdh=80 tdt=83 dtdh=80 dtdt=83
rintr=20331826 tintr=10434138 lintr=0 intr=29515795 link=1000
*** EHBA-2-E-RJ45 00004100a7640000 ***
seen=00000081 Ims=000000dd Icr=00000000 c->im=000000dd
Rdbal=43b8ce00 Rdbah=00000000 Tdbal=43b8ee00 Tdbah=00000000 Rxdctl=02010000
Packets Received (64 Bytes): 15729773 2137011
Packets Received (512-1023 Bytes): 487078 89733
Packets Received (1024-mtu Bytes): 6201971 649574
Good Packets Received: 22418822 2876318
Broadcast Packets Received: 388305 73600
Good Packets Transmitted: 21702648 2737358
Good Octets Received: 47877720360 5508772216
Good Octets Transmitted: 84978060984 6479224616
Total Octets Received: 47891252560 5511202352
Total Octets Transmitted: 84978060984 6479224616
Total Packets Received: 22603783 2911265
Total Packets Transmitted: 21702648 2737358
Packets Transmitted (64 Bytes): 6729557 745077
Packets Transmitted (512-1023 Bytes): 139606 18584
Packets Transmitted (1024-mtu Bytes): 14833485 1973697
Broadcast Packets Transmitted: 21566 4085
Interrupt Assertion: 29760202 4007037
Interrupt RxPktTimer: 22418822 2876318
Interrupt RxAbsTimer: 22138946 2870738
Interrupt TxPktTimer: 21702648 2737358
Interrupt Tx Desc Low: 21702648 2737358
nrd=256 rdfree=230 rxerr=0 nobufs=0 rdh=134 rdt=108 drdh=134 drdt=108
ntd=256 txavail=252 dropped=7 tdh=100 tdt=103 dtdh=100 dtdt=103
rintr=20325656 tintr=10436036 lintr=0 intr=29490268 link=1000
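The before/after port counters above can be diffed to confirm that test traffic really spread across both ports; a tiny sketch using the values from this slide.

```python
# Per-port packet deltas across the transfer (values copied from the slide).
before = {"port0": (22603770, 21702657), "port1": (22603783, 21702648)}
after  = {"port0": (22615833, 21713926), "port1": (22615843, 21713915)}

for port in before:
    rx_delta = after[port][0] - before[port][0]
    tx_delta = after[port][1] - before[port][1]
    print(f"{port}: +{rx_delta} rx packets, +{tx_delta} tx packets")
# port0: +12063 rx, +11269 tx   port1: +12060 rx, +11267 tx
```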

  36. Demo

  37. References • Configuring Coraid EtherDrive SAN appliances and deploying with ESX/ESXi 3.5 and 4.x (Partner Support) • http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1031322 • VMware and Coraid Technology Alliance Partner • http://www.coraid.com/pdf/app_notes/VMW_1110_Coraid_TAP.pdf
