
VMworld 2011


Presentation Transcript


  1. VMworld 2011 Rick Scherer, Cloud Architect – EMC Corporation @rick_vmwaretips – VMwareTips.com @ the Venetian, Las Vegas

  2. Agenda VMworld 2011 Highlights VMware Announcements EMC Announcements Hands On Labs Details Q & A

  3. VMworld 2011 Highlights • Show Dates: August 29 – September 1 • VMworld Theme: Your Cloud. Own It. • Attendees: 19,000+ • Audience Profile • IT Manager (44%), Architect, Sys Admin, C-Level, IT Director

  4. A Few Statistics Shared by Paul Maritz During the Keynote

  5. VMworld Session Highlights Over 175 Unique Breakout Sessions and 20+ Labs • Sessions by Tracks & Industries • Cloud Application Platform • Business Continuity • Mgt. and Operations • Security and Compliance • Virtualizing Business Critical Applications • vSphere • End-User Computing • Partner (For VMware Partners Only) • Technology Partners/Sponsors • Technology Exchange for Alliance Partners

  6. VMworld 2011 Numbers… • 200+ Exhibitors in Solutions Exchange • Over 19,000 attendees • 7 EMC-led sessions • NetApp had 3 sessions • HP had 2 sessions • Dell and IBM had 1 session each • Over 13,500 Labs attended • Over 148,100 VMs deployed • 175 unique Breakout Sessions • Staged more than 470 lab seats

  7. VMware Announcements

  8. VMware Introduces Portal for Database as a Service (DBaaS) • Reduce Database Sprawl • Self-service Database Management • Extend virtual resource capabilities to the data tier

  9. VMware vCloud Accelerates Journey to Enterprise Hybrid Cloud • VMware vCloud Connector 1.5 • Fast, reliable transfer between private and public clouds • vcloud.vmware.com • Find, connect with, and test drive Service Provider vCloud offerings • Disaster recovery to the cloud with vCenter Site Recovery Manager 5 • Cloud-based Disaster Recovery Services • VMware Cloud Infrastructure Suite • The Foundation for the Enterprise Hybrid Cloud • VMware vCenter Site Recovery Manager 5 • VMware vCloud Director 1.5 • VMware vShield 5 • Together, these products will help customers transform IT to drive greater efficiency of existing investments and improve operational agility.

  10. End User Computing in the Post-PC Era • VMware View 5 • VMware Horizon • Projects AppBlast and Octopus

  11. VMware and Cisco Collaborate on Cloud Innovation • VXLAN submitted to the IETF • Offers the isolation and segmentation benefits of Layer 3 networks while VM traffic can still travel over a flat Layer 2 network
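
VXLAN achieves the segmentation described above by encapsulating the VM's Ethernet frame in UDP and tagging it with a 24-bit segment ID (the VNI). A minimal sketch of building the 8-byte VXLAN header as later standardized in RFC 7348; the helper names are illustrative, and the UDP port shown is the eventual IANA assignment, not part of the 2011 draft.

```python
import struct

VXLAN_UDP_PORT = 4789  # eventual IANA-assigned port for VXLAN

def build_vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags (I bit set), 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: the VNI field is valid
    return struct.pack("!B3s3sB", flags, b"\x00\x00\x00", vni.to_bytes(3, "big"), 0)

def encapsulate(inner_l2_frame: bytes, vni: int) -> bytes:
    """MAC-in-UDP: prepend the VXLAN header to the original Ethernet frame.
    (The outer UDP/IP/Ethernet headers are added by the sending VTEP.)"""
    return build_vxlan_header(vni) + inner_l2_frame
```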

  12. Technology Preview: vStorage APIs for VM and Application Granular Data Management • Preview of VM Volumes, no date given • Major change to the storage model – everything is at the vApp layer • Works with Block and NAS models • 5 storage vendor implementations demonstrated • EMC had a strong demonstration footprint

  13. How It All Comes Together • Storage resources • Administrative domain • Visible to all servers in an ESX cluster • One or more Capacity Pools may offer the required Storage Capabilities • Management via the VM Granular Storage API Provider • Data path to VM Volumes • Block: FC, FCoE, iSCSI (SCSI ReadWrite) • File: NFS (NFS ReadWrite) • Storage resources (array side) • One or more Storage Profiles advertising different Storage Capabilities • Manage VM Volumes • One-to-one mapping of a data VM Volume to a VMDK • Meta VM Volume for non-data VM files • Support for the VM Granular Storage web-service API (Create, Delete, Snap, Clone, Bind) • Delivered by the storage vendor • On or off array • (Diagram: ESX Server with NFS and VMFS Datastores, VM Storage Profiles and VM Volumes; Storage Systems with Capacity Pools, an IO De-mux and the VM Granular Storage API Provider)
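
The slide describes a vendor-delivered web-service provider exposing Create, Delete, Snap, Clone and Bind operations on VM Volumes. A hypothetical sketch of what such a provider interface could look like; every class and method name here is illustrative, not the actual API that was previewed.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class VMVolume:
    volume_id: str          # array-side identifier for the VM Volume
    capacity_pool: str      # Capacity Pool the volume was carved from
    is_meta: bool = False   # a Meta VM Volume holds the non-data VM files

class VMGranularStorageProvider(ABC):
    """Hypothetical vendor-delivered web-service provider (on or off array)."""

    @abstractmethod
    def create(self, capacity_pool: str, size_gb: int, profile: str) -> VMVolume: ...

    @abstractmethod
    def delete(self, volume: VMVolume) -> None: ...

    @abstractmethod
    def snap(self, volume: VMVolume) -> VMVolume: ...

    @abstractmethod
    def clone(self, volume: VMVolume) -> VMVolume: ...

    @abstractmethod
    def bind(self, volume: VMVolume, esx_host: str) -> str:
        """Return a data-path handle (block or NFS) the ESX host can use for IO."""
```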

  14. EMC Announcements

  15. What Does EMC Support with vSphere 5 and When? • Storage platforms on the vSphere 5 HCL: VPLEX, VMAX, VMAXe, VNX, VNXe, CX4, NS • EMC Virtual Storage Integrator 5 (VSI) – Supported • VAAI New API Support • VMAX – Already included in the latest Enginuity, but testing is still underway; not supported yet • VNX – Beta of new operating code to support the new APIs underway • VNXe – Target for file-level support is Q4 2011; block support in 2012 • Isilon – Support coming in 2012 • VASA Support • EMC general support GAs 9/23 • Will support block protocols for VMAX, DMX, VNX, CX4, NS • File support for these platforms and others in 2012 • PowerPath/VE • Day 1 support, including an updated, simpler licensing model and support for Stateless ESX

  16. EMC Releases VSI 5, A 5th Gen Plug-in for VMware • Full support for VMware vSphere 5 • Provisions in minutes • New and Robust Role Based Access Controls

  17. EMC Breaks World Record with vSphere 5: 1 Million IOPS Through a Single vSphere Host

  18. EMC Breaks World Record with vSphere 5 New World Record: 10GBps from vSphere 5

  19. EMC VNX Accelerates VMware View 5.0 – Boot 500 Desktops in 5 Minutes • Boost Performance during login storms • Maximize Efficiency • Simplify Management

  20. EMC Technology Preview – Scale Out NAS • Characteristics of true scale-out NAS model • Multiple Nodes Presenting a Single, Scalable File System and Volume • N-Way, Scalable Resiliency • Linearly Scalable IO and Throughput • Storage Efficiency

  21. Tech Preview: Storage, Compute, PCIe Flash Uber Mashup • Emerging Workloads have Different Requirements • Some benefit by moving compute closer to storage • Others benefit by moving data closer to compute • EMC demonstrated both • The effect of Lightning IO cards on latency-sensitive apps • The effect of running a VM on a storage node for bandwidth-constrained apps

  22. EMC Technology Preview: Avamar vCloud Protector • Backing up and restoring vCloud Director is not simple • Granular, reliable, and fast tenant-based restores are a must • Self-service backup and restore is required in a cloud

  23. vShield 5 and RSA Integration – Can The Virtual Be More Secure Than The Physical? • VMware vShield App with Data Security • Uses RSA DLP Technology • Check compliance against global standards • Catch data leakage • Integrates with RSA enVision and Archer eGRC

  24. Hands On Labs

  25. VMware Hands on Labs • 10 Billion+ IOs served • ~148,138 VMs created and destroyed over 4 days – 4,000+ more than VMworld 2010 • 2x EMC VNX 5700s • 131.115 terabytes of NFS traffic • 9.73728 billion NFS I/Os • VNX internal avg NFS read latency of 1.484ms • VNX internal avg NFS write latency of 2.867ms

  26. EMC vLabs • Infrastructure running on a pair of EMC VNX7500s • Each loaded with SSD, FAST Cache, FAST VP and 10GbE • Most of the load on NFS with 3 data movers (and 1 standby) • (Chart: Statistics on Types of Demos)

  27. VM HA – Ground-Up Rewrite

  28. VM HA Enhancement Summary • Enhanced vSphere HA core • A foundation for increased scale and functionality • Eliminates common DNS issues • Multiple Communication Paths • Can leverage storage as well as the management network for communications • Enhances the ability to detect certain types of failures and provides redundancy • Also: • IPv6 Support • Enhanced Error Reporting • Enhanced User Interface • Enhanced Deployment Mechanism

  29. vSphere HA Primary Components • Every host runs an agent, referred to as 'FDM' or Fault Domain Manager • One of the agents within the cluster is chosen to assume the role of the Master • There is only one Master per cluster during normal operations • All other agents assume the role of Slaves • There is no more Primary/Secondary concept with vSphere HA • (Diagram: four ESX hosts, each running an FDM agent, managed by vCenter – useful for VPLEX and Stretched Clusters)

  30. The Master Role • An FDM Master monitors: • ESX hosts and Virtual Machine availability. • All Slave hosts. Upon a Slave host failure, protected VMs on that host will be restarted. • The power state of all the protected VMs. Upon failure of a protected VM, the Master will restart it. • An FDM Master manages: • The list of hosts that are members of the cluster, updating this list as hosts are added or removed from the cluster. • The list of protected VMs. The Master updates this list after each user-initiated power on or power off. • (Diagram: four ESX hosts running FDM agents, managed by vCenter)

  31. The Slave Role • A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master. • It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring. • It monitors the health of the Master. If the Master should fail, it participates in the election process for a new Master. • Maintains a list of powered-on VMs. • (Diagram: four ESX hosts running FDM agents, managed by vCenter)

  32. The Master Election Process • The following algorithm is used for selecting the Master: • The host with access to the greatest number of datastores wins. • In a tie, the host with the lexically highest moid is chosen. For example, moid "host-99" would be higher than moid "host-100", since "9" is greater than "1". • (Diagram: FDM agents on four ESX hosts participating in the election, managed by vCenter)
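
A minimal sketch of the comparison described above, assuming each candidate host is represented by its moid and its count of accessible datastores; the function name and data structure are illustrative, not the actual FDM implementation.

```python
def elect_master(candidates: dict[str, int]) -> str:
    """Pick the HA Master from {moid: accessible_datastore_count}.

    Most accessible datastores wins; ties are broken by the lexically
    highest moid (plain string comparison, so "host-99" > "host-100").
    """
    return max(candidates, key=lambda moid: (candidates[moid], moid))

# Example: equal datastore counts, so the lexical tie-breaker applies
hosts = {"host-99": 4, "host-100": 4, "host-42": 3}
assert elect_master(hosts) == "host-99"
```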

  33. Agent Communications • Primary agent communications utilize the management network. • All communication is point-to-point (no broadcast). • The election is conducted using UDP. • Once the election is complete, all further Master-to-Slave communication is via SSL-encrypted TCP. • Each Slave maintains a single TCP connection to the Master. • Datastores are used as a backup communication channel when a cluster's management network becomes partitioned. • (Diagram: FDM agents on four ESX hosts, managed by vCenter)

  34. Storage-Level Communications • One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication. • The datastores used for this are referred to as 'Heartbeat Datastores'. • This provides for increased communication redundancy. • Heartbeat datastores are used as a communication channel only when the management network is lost – such as in the case of isolation or network partitioning. • (Diagram: FDM agents on four ESX hosts, managed by vCenter – useful for VPLEX and stretched clusters)

  35. Storage-Level Communications • Heartbeat Datastores allow a Master to: • Monitor availability of Slave hosts and the VMs running on them. • Determine whether a host has become network isolated rather than network partitioned. • Coordinate with other Masters – since a VM can be owned by only one Master, Masters will coordinate VM ownership through datastore communication. • By default, vCenter will automatically pick 2 datastores. These 2 datastores can also be selected by the user. • (Diagram: FDM agents on four ESX hosts, managed by vCenter – useful for VPLEX and stretched clusters)

  36. Storage-Level Communications • Host availability can be inferred differently, depending on the storage used: • For VMFS datastores, the Master reads the VMFS heartbeat region. • For NFS datastores, the Master monitors a heartbeat file that is periodically touched by the Slaves. • Virtual Machine availability is reported by a file created by each Slave which lists the powered-on VMs. • Multiple-Master coordination is done by using file locks on the datastore. • (Diagram: FDM agents on four ESX hosts, managed by vCenter – useful for VPLEX and stretched clusters)
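
A minimal sketch of the NFS-style heartbeat check described above: the Master treats a Slave as alive if the Slave's heartbeat file has been touched within a timeout window. The file paths and timeout value are illustrative, not the actual FDM values.

```python
import os
import time

HEARTBEAT_TIMEOUT_S = 30  # illustrative timeout, not the actual FDM value

def slave_touch_heartbeat(path: str) -> None:
    """Slave side: periodically 'touch' its heartbeat file on the datastore."""
    with open(path, "a"):
        pass                   # create the file if it does not exist
    os.utime(path, None)       # update the file's modification time

def master_slave_is_alive(path: str) -> bool:
    """Master side: a Slave is considered alive if its heartbeat file
    was modified within the timeout window."""
    try:
        age = time.time() - os.path.getmtime(path)
    except FileNotFoundError:
        return False
    return age <= HEARTBEAT_TIMEOUT_S
```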

  37. Storage-Related Stuff

  38. vSphere 5 – vStorage changes • vStorage APIs for Array Integration (VAAI) – expansion • vStorage APIs for Storage Awareness (VASA) • VMFS-5 • Storage DRS • Storage vMotion enhancements • All Paths Down (APD) and Persistent Device Loss (PDL) • Native Software FCoE initiator • NFS – improvements for scale-out

  39. 1. VAAI Updates

  40. Understanding VAAI a little "lower" • VAAI = vStorage APIs for Array Integration • A set of APIs to allow ESX to offload functions to storage arrays • In vSphere 4.1, supported on VMware File Systems (VMFS) and Raw Device Mappings (RDM) volumes • vSphere 5 adds NFS VAAI APIs • Supported by EMC VNX, CX/NS, VMAX arrays (coming soon to Isilon) • Goals • Remove bottlenecks • Offload expensive data operations to storage arrays • Motivation • Efficiency • Scaling • (Diagram: data-mover evolution – VI3.5 fsdm, vSphere 4 fs3dm in software, vSphere 4.1/5 in hardware = VAAI; from VMworld 2009 session TA3220 – Satyam Vaghani)

  41. Growing list of VAAI hardware offloads • vSphere 4.1 • For Block Storage: HW Accelerated Locking, HW Accelerated Zero, HW Accelerated Copy • For NAS storage: None • vSphere 5 • For Block Storage: Thin Provision Stun, Space Reclaim • For NAS storage: Full Clone, Extended Stats, Space Reservation

  42. Hardware-Accelerated Locking • Without API • Reserves the complete LUN (via a SCSI-2 reservation) just to update a file lock • Required several SCSI-2 commands • LUN-level locks affect adjacent hosts • With API • Commonly implemented as a vendor-unique SCSI opcode • Moving to the SCSI CAW (Compare And Write) opcode in vSphere 5 (more standard) • Transfers two 512-byte sectors • Compares the first sector to the target LBA; if it matches, writes the second sector, otherwise returns a miscompare
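
A minimal sketch of the compare-and-write semantics described above, modeling the array as a simple in-memory block store; the names and the in-memory model are illustrative only.

```python
SECTOR_SIZE = 512

class MiscompareError(Exception):
    """Raised when the on-disk sector does not match the expected data."""

def compare_and_write(lun: bytearray, lba: int, expected: bytes, new: bytes) -> None:
    """Atomic compare-and-write of one 512-byte sector (the CAW primitive).

    The host sends two sectors: the expected contents and the new contents.
    The array compares the sector at `lba` with `expected`; only on a match
    does it write `new`, so a VMFS lock can be updated without holding a
    SCSI-2 reservation on the whole LUN.
    """
    assert len(expected) == len(new) == SECTOR_SIZE
    offset = lba * SECTOR_SIZE
    if bytes(lun[offset:offset + SECTOR_SIZE]) != expected:
        raise MiscompareError(f"sector at LBA {lba} changed underneath us")
    lun[offset:offset + SECTOR_SIZE] = new
```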

  43. Hardware-Accelerated Zero • Without API • SCSI WRITE – many identical small blocks of zeroes moved from host to array for MANY VMware IO operations • Extra zeroes can be removed by EMC arrays after the fact by manually initiating "space reclaim" on the entire device • New guest IO to a VMDK is "pre-zeroed" • With API • SCSI WRITE SAME – one block of zeroes moved from host to array and repeatedly written • A thin-provisioned array skips the zeroes completely (pre "zero reclaim") • Moving to the SCSI UNMAP opcode in vSphere 5 (which will be "more standard", and will always return blocks to the free pool) • (Diagram: many SCSI WRITE commands of zeroes interleaved with data writes vs. a single SCSI WRITE SAME for the VMDK)
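
A minimal sketch contrasting the two paths above: zeroing a region with many individual writes versus a single repeated-pattern (WRITE SAME style) command handled inside the array. The callables and return values are illustrative; the point is the command count seen by the host.

```python
SECTOR = 512

def zero_without_offload(write_sector, start_lba: int, num_sectors: int) -> int:
    """Host-driven zeroing: one SCSI WRITE of zeroes per sector."""
    for lba in range(start_lba, start_lba + num_sectors):
        write_sector(lba, b"\x00" * SECTOR)
    return num_sectors              # commands sent over the wire

def zero_with_write_same(array_write_same, start_lba: int, num_sectors: int) -> int:
    """Offloaded zeroing: one WRITE SAME-style command; the array repeats
    (or, if thin-provisioned, simply skips) the zero pattern internally."""
    array_write_same(start_lba, num_sectors, pattern=b"\x00" * SECTOR)
    return 1                        # a single command regardless of region size
```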

  44. Hardware-Accelerated Copy • Without API • SCSI READ (data moved from array to host) • SCSI WRITE (data moved from host to array) • Repeat • Huge periods of large VMFS-level IO, done via millions of small block operations • With API • Subset of the SCSI eXtended COPY (XCOPY) opcode • Allows copy within or between LUs • Order of magnitude reduction in IO operations • Order of magnitude reduction in array IOPS • Use Cases • Storage VMotion • VM Creation from Template • (Diagram: "let's Storage VMotion" – many SCSI READ/WRITE pairs without the API vs. a single SCSI EXTENDED COPY for a VM clone/deploy from template)
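
A minimal sketch of the difference: copying extents through the host versus handing the array a single extent-copy (XCOPY style) descriptor. The interfaces are illustrative and only the host-visible command count is modeled.

```python
def copy_through_host(read_sector, write_sector, src_lba: int, dst_lba: int, count: int) -> int:
    """Without the API: every sector crosses the wire twice (READ then WRITE)."""
    for i in range(count):
        write_sector(dst_lba + i, read_sector(src_lba + i))
    return 2 * count                      # host-visible SCSI commands

def copy_with_xcopy(array_extent_copy, src_lba: int, dst_lba: int, count: int) -> int:
    """With the API: one extent-copy descriptor; the array moves the data
    internally (within or between LUs), so the host sends a single command."""
    array_extent_copy(src_lba=src_lba, dst_lba=dst_lba, num_sectors=count)
    return 1
```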

  45. VAAI in vSphere 4.1 = Big impact http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf

  46. vSphere 5 – Thin Provision Stun • Without API • When a datastore cannot allocate in VMFS because of an exhaustion of free blocks in the LUN pool (in the array), VMs crash, snapshots fail, and other badness ensues. • Not a problem with "thick" devices, as allocation is fixed. • Thin LUNs can fail to deliver a write BEFORE the VMFS is full. • Careful management at the VMware and array level is needed. • With API • Rather than erroring on the write, the array reports a new error message. • On receiving this error, the affected VMs are "stunned", giving the opportunity to expand the thin pool at the array level. • (Diagram: VMDKs on a VMFS-5 extent backed by thin LUNs – writes succeed until the storage pool runs out of free blocks)
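
A minimal sketch of the behavior described above: when a write fails with a "thin pool exhausted" style condition, the VM is paused (stunned) instead of crashing, then resumed once space has been added. The exception name and the VM/pool interfaces are illustrative, not actual vSphere APIs.

```python
class ThinPoolExhausted(Exception):
    """Illustrative stand-in for the array's 'thin pool out of space' condition."""

def issue_guest_write(vm, write_fn, grow_pool_and_wait, lba: int, data: bytes) -> None:
    """Sketch of Thin Provision Stun: on a pool-exhaustion error the VM is
    paused (stunned) rather than crashed, space is added at the array, and
    the write is retried before the guest is resumed."""
    try:
        write_fn(lba, data)
    except ThinPoolExhausted:
        vm.stun()                 # pause the VM instead of surfacing an I/O error
        grow_pool_and_wait()      # illustrative hook: admin expands the thin pool
        write_fn(lba, data)       # retry the original write once space exists
        vm.unstun()               # resume the guest
```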

  47. vSphere 5 – Space Reclamation • Without API • When VMFS deletes a file, the file allocations are returned for use, and in some cases SCSI WRITE ZERO would zero out the blocks. • If the blocks were zeroed, manual space reclamation at the device layer could help. • With API • Instead of SCSI WRITE ZERO, SCSI UNMAP is used. • The array releases the blocks back to the free pool. • Used anytime VMFS deletes a file (svMotion, Delete VM, Delete Snapshot, Delete). • Note that in vSphere 5, SCSI UNMAP is used in many other places where previously SCSI WRITE ZERO would be used, and it depends on VMFS-5. • (Diagram: file creates issue SCSI WRITE of data; file deletes now issue SCSI UNMAP instead of SCSI WRITE ZERO, returning blocks to the storage pool)
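
A minimal sketch of the "with API" path: when the filesystem frees a file's extents, it issues an UNMAP for each freed block range so the array can return those blocks to its free pool. The toy file table and the callback are illustrative, not the VMFS implementation.

```python
from typing import Callable, Dict, List, Tuple

Extent = Tuple[int, int]  # (start_lba, num_sectors)

class TinyVMFS:
    """Toy file table: file name -> list of allocated extents."""

    def __init__(self, unmap: Callable[[int, int], None]):
        self.files: Dict[str, List[Extent]] = {}
        self.unmap = unmap  # array-side UNMAP handler (returns blocks to the pool)

    def delete_file(self, name: str) -> None:
        """Free the file's extents and tell the array to reclaim them."""
        for start_lba, num_sectors in self.files.pop(name, []):
            self.unmap(start_lba, num_sectors)  # SCSI UNMAP instead of writing zeroes

# Example: the "array" just records the reclaimed ranges
reclaimed: List[Extent] = []
fs = TinyVMFS(unmap=lambda lba, n: reclaimed.append((lba, n)))
fs.files["vm1.vmdk"] = [(0, 2048), (8192, 4096)]
fs.delete_file("vm1.vmdk")
assert reclaimed == [(0, 2048), (8192, 4096)]
```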

  48. vSphere 5 – NFS Full Copy • Without API • Some NFS servers have the ability to create file-level replicas. • This feature was not used for VMware operations, which were traditional host-based file copy operations. • Vendors would leverage it via vCenter plugins; for example, EMC exposed this array feature via the Virtual Storage Integrator plugin's Unified Storage Module. • With API • Implemented via a NAS vendor plugin, used by vSphere for clone and deploy-from-template operations • Uses the EMC VNX OE for File file-version capability • Somewhat analogous to the block XCOPY hardware offload • NOTE – not used during svMotion • (Diagram: without the API the ESX host does many file reads and writes over the NFS mount; with the API it asks the NFS server to create a copy (snap, clone, version) of FOO.VMDK as FOO-COPY.VMDK)
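
A minimal sketch contrasting host-based file copy with an offloaded server-side clone as described above; the `nas_clone_file` callable stands in for whatever vendor-specific operation the NAS plugin actually invokes.

```python
import shutil

def clone_through_host(src_path: str, dst_path: str) -> None:
    """Without the API: the ESX host reads every byte over NFS and writes it back."""
    shutil.copyfile(src_path, dst_path)   # many File Read / File Write round trips

def clone_on_nas(nas_clone_file, src_path: str, dst_path: str) -> None:
    """With the API: the NAS vendor plugin asks the filer to create the copy
    (snap, clone, or file version) itself; no file data crosses the NFS mount."""
    nas_clone_file(src=src_path, dst=dst_path)   # a single offloaded request
```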

  49. vSphere 5 – NFS Extended Stats – "just HOW much space does this file take?" • Without API • Unlike with VMFS, with NFS datastores vSphere does not control the filesystem itself. • With the vSphere 4.x client, only basic file and filesystem attributes were used. • This led to challenges with managing space when thin VMDKs were used, and administrators had no visibility into thin state and oversubscription of both datastores and VMDKs. • (Think: with thin LUNs under VMFS, you could at least see details on thin VMDKs.) • With API • Implemented via a NAS vendor plugin • The NFS client reads extended file/filesystem details • (Diagram: without the API the ESX host only sees "Filesize = 100GB"; with the API the NFS server reports that FOO.VMDK is a sparse file with 24GB allocated in the filesystem and, after dedupe, only 5GB really used)
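
A minimal sketch of the underlying idea: apparent file size versus the space actually allocated for a sparse file, as a POSIX stat on the server side would report it. (The vendor plugin would additionally report dedupe savings, which a plain stat cannot see; the file name is illustrative.)

```python
import os

def sparse_file_stats(path: str) -> dict:
    """Report apparent size vs. space actually allocated for a (possibly sparse) file."""
    st = os.stat(path)
    return {
        "apparent_size_bytes": st.st_size,      # what a basic client sees ("Filesize = 100GB")
        "allocated_bytes": st.st_blocks * 512,  # st_blocks is counted in 512-byte units
    }

# Example: create a 1 GiB sparse file and compare the two numbers
with open("sparse.vmdk", "wb") as f:
    f.truncate(1 << 30)                         # allocates (almost) no blocks
print(sparse_file_stats("sparse.vmdk"))
```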
