Presentation Transcript


  1. Halifax VMUG Next Generation Best Practices for Storage and VMware Rob Nourse Snr. vSpecialist (Eastern Canada) EMC Corporation Rob.nourse@emc.com

  2. The “Great” Protocol Debate • Every protocol can be highly available, and generally, every protocol can meet a broad performance band • Each protocol has different configuration considerations • Each protocol has a VMware “super-power”, and also a “kryptonite” • In vSphere, there is core feature equality across protocols Conclusion: there is no debate – pick what works for you! The best flexibility comes from a combination of VMFS and NFS

  3. First – Key Things To Know, “A” thru “F” (Key Best Practices for 2011)

  4. A – Leverage Key Docs (Key Best Practices circa 2010/2011)

  5. Key Docs, and Storage Array Taxonomy Highly Recommended (Canadian way of saying “mandatory”) Reading: Key VMware Docs: • Fibre Channel SAN Configuration Guide • iSCSI SAN Configuration Guide • Storage/SAN Compatibility Guide …Understand VMware Storage Taxonomy: • Active/Active (LUN ownership) • Active/Passive (LUN ownership) • Virtual Port (iSCSI only)

  6. Key Docs, and Storage Array Taxonomy Highly Recommended (Canadian way of saying “mandatory”) Reading: Key Storage Partner Docs: • Each Array is very different. Storage varies far more vendor to vendor than servers do • Find, read, and stay current on your array’s Best Practices Doc – most are excellent. • Even if you’re NOT the storage team, read them – it will help you. http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf http://www.emc.com/collateral/hardware/technical-documentation/h5536-vmware-esx-srvr-using-celerra-stor-sys-wp.pdf http://www.emc.com/collateral/software/solution-overview/h2197-vmware-esx-clariion-stor-syst-ldv.pdf

  7. B – Set Up Multipathing Right (Key Best Practices circa 2010/2011)

  8. Understanding the vSphere Pluggable Storage Architecture

  9. What’s “out of the box” in vSphere 4.1?
    [root@esxi ~]# vmware -v
    VMware ESX 4.1.0 build-260247
    [root@esxi ~]# esxcli nmp satp list
    Name                 Default PSP       Description
    VMW_SATP_SYMM        VMW_PSP_FIXED     Placeholder (plugin not loaded)
    VMW_SATP_SVC         VMW_PSP_FIXED     Placeholder (plugin not loaded)
    VMW_SATP_MSA         VMW_PSP_MRU       Placeholder (plugin not loaded)
    VMW_SATP_LSI         VMW_PSP_MRU       Placeholder (plugin not loaded)
    VMW_SATP_INV         VMW_PSP_FIXED     Placeholder (plugin not loaded)
    VMW_SATP_EVA         VMW_PSP_FIXED     Placeholder (plugin not loaded)
    VMW_SATP_EQL         VMW_PSP_FIXED     Placeholder (plugin not loaded)
    VMW_SATP_DEFAULT_AP  VMW_PSP_MRU       Placeholder (plugin not loaded)
    VMW_SATP_ALUA_CX     VMW_PSP_FIXED_AP  Placeholder (plugin not loaded)
    VMW_SATP_CX          VMW_PSP_MRU       Supports EMC CX that do not use the ALUA protocol
    VMW_SATP_ALUA        VMW_PSP_RR        Supports non-specific arrays that use the ALUA protocol
    VMW_SATP_DEFAULT_AA  VMW_PSP_FIXED     Supports non-specific active/active arrays
    VMW_SATP_LOCAL       VMW_PSP_FIXED     Supports direct attached devices
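The same CLI can also show which PSPs are installed and which SATP/PSP each device actually received – a quick sketch, assuming the vSphere 4.1 esxcli namespaces shown above:

    [root@esxi ~]# esxcli nmp psp list      # list the Path Selection Plugins available on this host
    [root@esxi ~]# esxcli nmp device list   # show each device with the SATP that claimed it and the PSP in use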

  10. What’s “out of the box” in vSphere? • PSPs: • Fixed (default for Active/Active LUN-ownership models) • All I/O goes down the preferred path, and reverts to the preferred path after the original path is restored • MRU (default for Active/Passive LUN-ownership models) • All I/O goes down the active path, and stays on the new path even after the original path is restored • Round Robin • n I/O operations go down the active path, then it rotates to the next path (default n is 1000) HOWTO – setting the PSP for a specific device (this can override the default selected by the SATP for the detected array ID): esxcli nmp device setpolicy --device <device UID> --psp VMW_PSP_RR (check with your vendor first!)
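If Round Robin should be the default for every LUN claimed by a given SATP (rather than being set per device), the SATP’s default PSP can be changed instead. A hedged sketch, assuming the vSphere 4.x syntax and an ALUA-capable array – confirm with your storage vendor before changing this:

    [root@esxi ~]# esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR   # devices claimed by this SATP now default to Round Robin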

  11. Changing Round Robin IOOperationLimit: esxcli nmp roundrobin setconfig --device <device UID> --iops <value> --type iops Check with your storage vendor first! This setting can cause problems on some arrays. It has been validated as OK, but is not necessary in most cases
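Before (or after) changing it, the current setting can be read back with the matching getconfig call – a minimal sketch, assuming the same vSphere 4.x namespace and the <device UID> placeholder used above:

    [root@esxi ~]# esxcli nmp roundrobin getconfig --device <device UID>   # shows the current IOOperationLimit and limit type for that device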

  12. Effect of different RR IOOperationLimit settings NOTE: This is with a SINGLE LUN. This is the case where the larger IOOperationLimit default is at its worst. In a real-world environment – lots of LUNs and VMs – the result is decent overall load balancing. Recommendation – if you can, stick with the default

  13. What is Asymmetric Logical Unit Access (ALUA)? • Many storage arrays have Active/Passive LUN ownership • All paths show in the vSphere Client as: • Active (can be used for I/O) • I/O is accepted on all ports • All I/O for a LUN is serviced on its owning storage processor • In reality, some paths are preferred over others • Enter ALUA to solve this issue • Support introduced in vSphere 4.0 [diagram: SP A and SP B, with the LUN owned by one SP]

  14. What is Asymmetric Logical Unit Access (ALUA)? • ALUA allows paths to be profiled: • Active (can be used for I/O) • Active non-optimized (not normally used for I/O) • Standby • Dead • Ensures optimal path selection/usage by the vSphere PSPs and 3rd-party MPPs • Supports the Fixed, MRU, & RR PSPs • Supports EMC PowerPath/VE • ALUA is not supported in ESX 3.5 [diagram: SP A and SP B, with the LUN owned by one SP]

  15. Understanding MPIO MPIO is based on “initiator-target” sessions – not “links”
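For software iSCSI this is why each VMkernel NIC gets bound to the initiator: every bound vmknic contributes its own initiator-target sessions for NMP or PowerPath/VE to balance across. A minimal sketch of the vSphere 4.x port-binding step, assuming hypothetical vmk1/vmk2 ports and software iSCSI adapter vmhba33:

    [root@esxi ~]# esxcli swiscsi nic add -n vmk1 -d vmhba33   # bind the first iSCSI VMkernel port to the software initiator
    [root@esxi ~]# esxcli swiscsi nic add -n vmk2 -d vmhba33   # bind the second port – each binding adds its own sessions/paths
    [root@esxi ~]# esxcli swiscsi nic list -d vmhba33          # verify the bindings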

  16. MPIO Exceptions – Windows Clusters Among a long list of “not supported” things: • NO Clustering on NFS datastores • No Clustering on iSCSI, FCoE (unless using PP/VE) • No round-robin with native multipathing (unless using PP/VE) • NO Mixed environments, such as configurations where one cluster node is running a different version of ESX/ESXi than another cluster node. • NO Use of MSCS in conjunction with VMware Fault Tolerance. • NO Migration with vMotion of clustered virtual machines. • NO N-Port ID Virtualization (NPIV) • You must use hardware version 7 with ESX/ESXi 4.1

  17. PowerPath – a Multipathing Plugin (MPP) [diagram: many VMs (APP/OS) on ESX hosts, each host running PowerPath/VE against shared storage] • Simple storage manageability • Simple provisioning = “pool of connectivity” • Predictable and consistent • Optimize server, storage, and data-path utilization • Performance and scale • Tune infrastructure performance, LUN/path prioritization • Predictive, array-specific load-balancing algorithms • Automatic HBA, path, and storage-processor fault recovery • Other 3rd-party MPPs: • Dell/EqualLogic PSP • Uses a “least deep queue” algorithm rather than basic round robin • Can redirect I/O to different peer storage nodes • See this at the Dell/EqualLogic booth

  18. NFS Considerations

  19. General NFS Best Practices • Start with Vendor Best Practices: • EMC Celerra H5536 & NetApp TR-3749 • While these are constantly being updated, at any given time, they are authoritative • Use the EMC & NetApp vCenter plug-ins, automates best practices • Use Multiple NFS datastores & 10GbE • 1GbE requires more complexity to address I/O scaling due to one data session per connection with NFSv3
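Mounting several NFS datastores from the ESX console looks like this – a minimal sketch with hypothetical server addresses and export paths (the vendor vCenter plug-ins do the equivalent, plus the best-practice settings, automatically):

    [root@esxi ~]# esxcfg-nas -a -o 192.168.10.50 -s /vol/nfs_ds01 nfs_ds01   # add the first NFS datastore
    [root@esxi ~]# esxcfg-nas -a -o 192.168.10.51 -s /vol/nfs_ds02 nfs_ds02   # a second datastore on another interface/alias to spread sessions
    [root@esxi ~]# esxcfg-nas -l                                              # list the mounted NFS datastores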

  20. General NFS Best Practices – Timeouts • Configure the following on each ESX server (automated by the vCenter plugins; see the console sketch below): • NFS.HeartbeatFrequency = 12 • NFS.HeartbeatTimeout = 5 • NFS.HeartbeatMaxFailures = 10 • Increase Guest OS disk time-out values to match: • Back up your Windows registry • Select Start > Run, regedit • In the left-panel hierarchy view, double-click HKEY_LOCAL_MACHINE > System > CurrentControlSet > Services > Disk • Select TimeOutValue and set the data value to 125 (decimal) • Note: this is not reset when VMware Tools are updated • Increase Net.TcpipHeapSize (follow your vendor’s recommendation)
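The host-side values can also be scripted from the ESX service console – a minimal sketch, assuming the vSphere 4.x esxcfg-advcfg paths for these advanced settings (the heap value is a placeholder; use your storage vendor’s recommended number):

    [root@esxi ~]# esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency    # NFS.HeartbeatFrequency = 12
    [root@esxi ~]# esxcfg-advcfg -s 5  /NFS/HeartbeatTimeout      # NFS.HeartbeatTimeout = 5
    [root@esxi ~]# esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures  # NFS.HeartbeatMaxFailures = 10
    [root@esxi ~]# esxcfg-advcfg -s 30 /Net/TcpipHeapSize         # example heap size – follow vendor guidance
    [root@esxi ~]# esxcfg-advcfg -g /NFS/HeartbeatFrequency       # -g reads a value back to verify it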

  21. General NFS Best Practices – Traditional Ethernet Switches • Mostly seen with older 1GbE switching platforms • Each switch operates independently • More complex network design • Depends on routing; requires two (or more) IP subnets for datastore traffic • Multiple Ethernet options based on EtherChannel capabilities and preferences • Some links may be passive standby links

  22. General NFS Best Practices – Multi-Switch Link Aggregation • Allows two physical switches to operate as a single logical fabric • Much simpler network design • Single IP subnet • Provides multiple active connections to each storage controller • Easily scales to more connections by adding NICs and aliases • Storage controller connection load balancing is automatically managed by the EtherChannel IP load-balancing policy

  23. General NFS Best Practices – HA and Scaling (decision flow) • 10GbE? • Yes → use one VMkernel port & IP subnet • No → do the switches support multi-switch link aggregation? • Yes → use multiple links with IP hash load balancing on the NFS client (ESX); the storage needs multiple sequential IP addresses • No → use multiple VMkernel ports & IP subnets, use the ESX routing table, and use multiple links with IP hash load balancing on the NFS server (array); the storage needs multiple sequential IP addresses

  24. iSCSI & NFS – Ethernet Jumbo Frames • What is an Ethernet jumbo frame? • An Ethernet frame with more than 1500 bytes of payload (9000 is common) • Commonly thought of as giving better performance due to greater payload per packet / fewer packets • Should I use jumbo frames? • Supported by all major storage vendors & VMware • Adds complexity, and performance gains are marginal with common block sizes • FCoE uses an MTU of 2240, which is auto-configured via the switch and CNA handshake • All other IP traffic transfers at the default MTU size Stick with the defaults when you can
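If jumbo frames are used anyway, they must be enabled end to end: the physical switch ports, the vSwitch, and the VMkernel ports. A minimal sketch of the vSphere 4.x CLI side, assuming a hypothetical vSwitch1 and an existing “IPStorage” port group – in 4.x the VMkernel NIC is generally created with the larger MTU rather than changed afterwards:

    [root@esxi ~]# esxcfg-vswitch -m 9000 vSwitch1                                        # raise the vSwitch MTU to 9000
    [root@esxi ~]# esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 -m 9000 IPStorage   # create the VMkernel port with MTU 9000
    [root@esxi ~]# esxcfg-vmknic -l                                                       # verify the MTU column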

  25. iSCSI & NFS caveat when used together • Remember – the iSCSI and NFS network HA models are DIFFERENT • iSCSI uses vmknics with no Ethernet failover – it uses MPIO instead • The NFS client relies on vmknics using link aggregation/Ethernet failover • NFS relies on the host routing table • NFS traffic can end up using the iSCSI vmknic, resulting in links without redundancy • Use of multiple-session iSCSI together with NFS is not supported by NetApp • EMC supports it, but the best practice is to use separate subnets and virtual interfaces

  26. Summary of “Set Up Multipathing Right” • VMFS/RDMs • The Round Robin policy for NMP is the default best practice on most storage platforms • PowerPath/VE further simplifies/automates multipathing on all EMC (and many non-EMC) platforms • Notably, it supports MSCS/WSFC, including vMotion and VM HA • NFS • For load balancing, distribute VMs across multiple datastores on multiple I/O paths • Follow the resiliency procedure in the TechBook to ensure VM resiliency to storage failover and reboot over NFS

  27. C – Alignment = Good Hygiene (Key Best Practices circa 2010/2011)

  28.–31. “Alignment = good hygiene” • Misalignment of filesystems results in additional work on the storage controller to satisfy an I/O request • Affects every protocol, and every storage array • VMFS on iSCSI, FC, & FCoE LUNs • NFS • VMDKs & RDMs with NTFS, EXT3, etc. • Filesystems exist in the datastore and in the VMDK [diagram, built up across these slides: guest filesystem clusters (4KB–1MB) sit on VMFS blocks (1MB–8MB), which sit on array chunks (4KB–64KB) – each layer needs to align with the one below]

  32. Alignment – Best Solution: “Align VMs” • VMware, Microsoft, Citrix, and EMC all agree: align partitions • Plug-and-play guest operating systems • Windows 2008, Vista, & Win7 • They just work, as their partitions start at 1MB • Guest operating systems requiring manual alignment • Windows NT, 2000, 2003, & XP (use diskpart to set the offset to 1MB) • Linux (use fdisk expert mode and align on sector 2048 = 1MB) – see the sketch below
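A minimal sketch of those manual alignment steps for the older guests – the disk number and device name are hypothetical examples:

    Windows NT/2000/2003/XP (inside the guest, against a new data disk):
        diskpart
        DISKPART> select disk 1
        DISKPART> create partition primary align=1024     (align is in KB, so 1024 = a 1MB starting offset)

    Linux (inside the guest, against a new data disk):
        fdisk -u /dev/sdb     (with -u, fdisk works in sectors; start the partition at sector 2048, i.e. 2048 x 512 bytes = 1MB – the same result as the expert-mode approach on the slide)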

  33. Alignment – “Fixing after the fact” • VMFS is misaligned • Occurs if you created the VMFS via the CLI rather than the vSphere Client and didn’t specify an offset • Resolution: • Step 1: Take an array snapshot/backup • Step 2: Create a new datastore & migrate VMs using Storage vMotion • The filesystem in the VMDK is misaligned • Occurs if you are using older OSes and didn’t align when you created the guest filesystem • Resolution: • Step 1: Take an array snapshot/backup • Step 2: Use tools to realign (all VMs must be shut down) • GParted (free, but some assembly required) • Quest vOptimizer (good mass scheduling and reporting)

  34. D – Leverage Free Plugins/VAAI (Key Best Practices circa 2010/2011)

  35. “Leverage Free Plugins and VAAI” • Use Vendor plug-ins for VMware vSphere • All provide better visibility • Some provide integrated provisioning • Some integrate array features like VM snapshots, dedupe, compression and more • Some automate multipathing setup • Some automate best practices and remediation • Most are FREE • VAAI – it is just “on” • With vSphere 4.1, VAAI increases VM scalability and reduces the amount of I/O traffic sent between the host and storage system and makes “never put more than ___ VMs per datastore” a thing of the past. • Some individual operations can be faster also (2-10x!)
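VAAI needs no per-datastore setup, but its three vSphere 4.1 primitives can be checked (or disabled for troubleshooting) through advanced settings – a hedged sketch; a value of 1 means the primitive is enabled:

    [root@esxi ~]# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove   # Full Copy primitive
    [root@esxi ~]# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit   # Block Zeroing primitive
    [root@esxi ~]# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking    # Hardware Assisted Locking (ATS) primitive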

  36. E – KISS on Layout (Key Best Practices circa 2010/2011)

  37. “KISS on Layout” • Use VMFS and NFS together – there’s no reason not to • Strongly consider 10GbE, particularly for new deployments • Avoid RDMs; use “pools” (VMFS or NFS) • Make the datastores big • VMFS – make them ~1.9TB in size (2TB minus 512 bytes is the max for a single extent; 64TB for a spanned filesystem) • NFS – make them whatever size you want (16TB is the max) • With vSphere 4.0 and later, you can have many VMs per VMFS datastore – and VAAI makes this a non-issue • On the array, default to storage pools, not traditional RAID groups/hypers • Default to single-extent VMFS datastores • Default to thin-provisioning models at the array level, optionally at the VMware level (see the sketch below) • Make sure you enable vCenter managed-datastore alerts • Make sure you enable Unisphere/SMC thin-provisioning alerts and auto-expansion • Use “broad” data services – i.e. FAST, FAST Cache (things that are “set in one place”)
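Thin provisioning at the VMware level is just a per-VMDK choice – a minimal sketch using vmkfstools with hypothetical datastore and VM names:

    [root@esxi ~]# vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/myvm/myvm_1.vmdk   # create a 40GB thin-provisioned virtual disk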

  38. F – Use SIOC if You Can (Key Best Practices circa 2010/2011)

  39. “Use SIOC if you can” • This is a huge vSphere 4.1 feature • “If you can” equals: • vSphere 4.1, Enterprise Plus • VMFS (NFS is targeted for future vSphere releases – not purely a qual) • Enable it (it’s not on by default), even if you don’t use shares – it will ensure no VM swamps the others • Bonus: you get guest-level latency alerting! • The default threshold is 30ms • Leave it at 30ms for 10K/15K drives, increase to 50ms for 7.2K, decrease to 10ms for SSD • Fully supported with array auto-tiering – leave it at 30ms for FAST pools • Hard I/O limits are handy for View use cases • Some good recommended reading: • http://www.vmware.com/files/pdf/techpaper/VMW-vSphere41-SIOC.pdf • http://virtualgeek.typepad.com/virtual_geek/2010/07/vsphere-41-sioc-and-array-auto-tiering.html • http://virtualgeek.typepad.com/virtual_geek/2010/08/drs-for-storage.html • http://www.yellow-bricks.com/2010/09/29/storage-io-fairness/

  40. Second – knowing when to break the rules… Top 5 exceptions to said best practices

  41. 5 Exceptions to the rules • Create “planned datastore designs” (rather than big pools corrected after the fact) for larger I/O use cases (View, SAP, Oracle, Exchange) • Use the VMware + array vendor reference architectures • Generally the cases where you need > 32 HBA queue depth & should consider > 1 vSCSI adapters • Over time, SIOC may prove to be a good approach • Some relatively rare cases where large spanned VMFS datastores make sense • When NOT to use “datastore pools”, but pRDMs (narrow use cases!) • MSCS/WSFC • Oracle – pRDMs and NFS can do rapid V-to-P with array snapshots • When NOT to use NMP Round Robin • Arrays that are not active/active AND implement ALUA using only SCSI-2 • When NOT to use array thin-provisioned devices • Datastores with an extremely high amount of small-block random I/O • In FLARE 30, always use storage pools; LUN-migrate to thick devices if needed • When NOT to use the vCenter plugins? Trick question – always “yes”

  42. Finally – a peek into the future… Amazing things we’re working on….

  43. 5 Amazing things we’re working on… • Storage policy • How should storage inform vSphere of capabilities and state (and vice versa)? • SIOC and auto-tiering complement each other today; how can we integrate them? • How can we embed VM-level encryption? • “Bolt-on” vs. “built for purpose” using virtual-appliance constructs • EMC has 3 shipping virtual storage appliances (Atmos/VE, Avamar/VE, Networker/VE) • Every EMC array is really a cluster of commodity servers with disks • What more could we do to make “bolt-on value” easier this way? • “Follow the breadcrumb trail”: http://stevetodd.typepad.com/my_weblog/2010/09/csx-technology.html • Maturing scale-out NAS/pNFS models • Desired (not demanded) in the enterprise; demanded (not desired) for scale-out public cloud NAS (EMC has GA’ed pNFS, but the vSphere client is still NFSv3) • Large-scale, long-distance geo-dispersion/federation of transactional workloads • VM teleportation – around the world, at many sites • Geo-location to meet FISMA and other standards • Making object storage act transactional – for real • Would blend the best of all worlds & enable VM-level policy and enforcement

  44. THANK YOU
