
VMworld 2010 wrap-up Focus on the ‘Cloud’ I tuned the term out, but think dynamic IT infrastructure








  1. VMworld 2010 wrap-up
     Focus on the ‘Cloud’: I tuned the term out, but think dynamic IT infrastructure
     Unlike 2009, with its many new products and releases, 2010 was more a year of refining the existing products (vSphere 4.5 was a no-show, and significant enhancements were pushed out to ESX/vSphere 5 in 2011)
     Signs of the challenges of code and product complexity and integration
     Dinner with the CTO; talked directly with Product Managers
     VMware is moving away from vSphere 4.0 kernel enhancements for specific vendors (Cisco, EMC, etc.) toward enhanced APIs for broader usage
     Acknowledgement that the APIs needed work
     vSphere 4.1 / View 4.5: enhancing scalability and tightening up the products (details in following slides)
     In Las Vegas for 2011!
  2. The following slides are a subset of a Trace|3 presentation graciously provided by David Hekimian, Virtualization Practice Manager, Trace|3
     The full slide set includes:
     Building a 50,000 Seat VMware View 4.5 Deployment: A Collaboration by Cisco, VMware, NetApp, Fujitsu and WYSE
     NetApp and VMware integration
     Thin Client Overview - Wyse
     Best Practices for a VMware View Proof of Concept and Implementation, presented by John Dodge, World Wide Practice Director for Virtual Desktop at VMware
     Go to http://www.trace3.com/register/vmware/thankyou.php or contact your local Trace|3 representative: http://www.trace3.com/contact.php
     San Diego: 505 Lomas Santa Fe Drive, Suite 270, Solana Beach, CA 92075, p: 858 345-2650, f: 949 333-2400
  3. VMware View Road Show, San Diego, Sept 29 2010, Estancia Hotel in La Jolla

  4. What’s new with vSphere 4.1?
  5. VMware vSphere 4.1: What’s New?
     Infrastructure Services:
     vCompute: Memory Compression
     vStorage: Storage I/O Control, More Performance Metrics, APIs for Array Integration
     vNetwork: Network I/O Control, Load Based Teaming, IPv6 NIST compliance
     Application Services:
     Availability: HA Diagnostics and Healthcheck, vMotion Speed and Scale
     Security: AD Integration (host)
     Scalability: More VMs (per cluster, DC), More Hosts (per VC, DC)
     Also: Update Manager Enhancements, Virtual Serial Port Concentrator, vCenter Server (64-bit)
  6. vSphere 4.1 Delivers “Cloud Scale”
     3,000 VMs / cluster (2x)
     500 hosts / vCenter (5x)
     10,000 VMs / vCenter (3x)
     99% of VMware’s 170K customers can run their entire datacenter in a single VMware cluster*
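A quick sketch of what those multipliers imply about the previous limits. The derived figures are inferred purely from this slide's "2x / 5x / 3x" claims, not quoted from vSphere 4.0's official configuration maximums:

```python
# vSphere 4.1 limits as stated on the slide.
new_limits = {"vms_per_cluster": 3000, "hosts_per_vcenter": 500, "vms_per_vcenter": 10000}
# Improvement multipliers as stated on the slide.
multipliers = {"vms_per_cluster": 2, "hosts_per_vcenter": 5, "vms_per_vcenter": 3}

# Prior limits implied by dividing each new limit by its multiplier
# (an inference from the slide, not official 4.0 maximums).
implied_prior = {k: new_limits[k] // multipliers[k] for k in new_limits}
print(implied_prior)
# {'vms_per_cluster': 1500, 'hosts_per_vcenter': 100, 'vms_per_vcenter': 3333}
```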
  7. Enhanced Scalability Defined
  8. Migration to ESXi with vSphere 4.1 Recommended that all vSphere 4.1 deployments use the ESXi Hypervisor vSphere ESXi 4.1 Fully Supports Boot From SAN for FC, iSCSI and FCoE vSphere 4.1 is the last release with the ESX hypervisor (ESX with Service Console) Visit the ESXi Upgrade Center - http://www.vmware.com/products/vsphere/esxi-upgrade/
  9. ESXi to ESX Info Center
  10. vCenter Server – Migration to 64-bit
     vCenter Server MUST be hosted on a 64-bit Windows OS; a 32-bit OS is NOT supported as a host OS for vCenter in vSphere 4.1
     Why the change? Scalability is restricted by the x86 32-bit virtual address space, and moving to 64-bit eliminates this problem; it also reduces Dev and QA cycles and resources (faster time to market)
     Two options: (1) vCenter Server in a virtual machine running a 64-bit Windows OS, or (2) vCenter Server installed on a physical 64-bit Windows OS host
     Best practice: use option 1
  11. Storage I/O Control
     Description: Set storage quality-of-service priorities per virtual machine
     Proof Point: Basic – make storage access rights equal between VMs; Advanced – prioritize use of storage per VM (similar to how compute is prioritized with vSphere); business priorities now define low- and high-priority storage resource access; create the “high speed” or HOV lane for VMs
     Benefits: 1. All VMs created equal 2. Make your mission-critical VMs VIPs; guarantee service levels for access to storage resources
     Beta Feedback: “I really feel that the Storage I/O Control is a must have for our environment and we should move forward without delay.”
  12. Storage I/O Control (SIOC)
     [Diagram: three VMs (Microsoft Exchange, online store, data mining) sharing a 32 GHz / 16 GB host and Datastore A. Exchange and the online store have CPU, memory, and I/O shares set to High; the data-mining VM has all shares set to Low.]
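The shares model on this slide can be sketched as simple proportional allocation. This is an illustrative model of the idea, not SIOC's actual algorithm; the share values and the 9,000-IOPS datastore capacity are assumed example figures:

```python
def allocate_iops(total_iops, shares):
    """Split available IOPS across VMs in proportion to their I/O shares,
    the proportional-share idea SIOC applies under datastore contention."""
    total_shares = sum(shares.values())
    return {vm: total_iops * s / total_shares for vm, s in shares.items()}

# Illustrative share values mirroring the slide's diagram:
# High shares for Exchange and the online store, Low for data mining.
shares = {"exchange": 2000, "online_store": 2000, "data_mining": 500}
alloc = allocate_iops(9000, shares)
print(alloc)
# {'exchange': 4000.0, 'online_store': 4000.0, 'data_mining': 1000.0}
```

Under contention the two high-priority VMs each get four times the throughput of the low-priority one, matching their 2000:500 share ratio.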
  13. Storage Performance Reporting
     Description: Delivery of key storage performance statistics in vCenter
     Proof Point: Real-time and historical trending for storage
     Benefits: Granular storage reporting for improved tuning and troubleshooting of performance; independent of storage architectures and protocols
     Beta Feedback: “In the monitoring area, the enhanced storage statistics are very useful”
  14. Network I/O Control
     Description: Set network quality-of-service priorities per flow type (iSCSI, NFS, FT, vMotion, TCP/IP, etc.) on the Distributed Switch
     Proof Point: Basic – make network access rights equal between flow types; Advanced – prioritized use of the network, especially in 10 Gbit environments; business priorities now define low- and high-priority network resource access as needed; create the “high speed” or HOV lane for VMs
     Benefits: Guarantee service levels for access to network resources on consolidated 10 GigE links
     Beta Feedback: “The new Network I/O control feature is very interesting for consolidating network links with 10Gbit.”
  15. vMotion Performance and Scale Enhancements
     Description: Adding “Cloud Scale” to online virtual machine migration (a VMware key differentiator)
     Proof Point: Performance and scalability; more live migrations in parallel (up to 8 per host pair); elapsed time reduced by >4.5x in 10GbE tests
     Benefits: 5x faster with the 4.1 platform release
     Beta Feedback: “This release product has some nice benefits, in particular increased vMotion capabilities.”
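To see why these two numbers compound, here is a rough host-evacuation estimate. The 40-VM host, the 2-minute baseline per migration, and the baseline concurrency of 2 are all assumed figures for illustration; only the "8 concurrent" and "~5x faster" values come from the slide:

```python
import math

def evacuation_minutes(vm_count, minutes_per_vm, concurrency):
    """Time to drain a host if migrations run in waves of `concurrency`
    and each wave takes `minutes_per_vm` (a simplified wave model)."""
    waves = math.ceil(vm_count / concurrency)
    return waves * minutes_per_vm

# Assumed baseline: 40 VMs, 2 concurrent vMotions, 2 min per migration.
before = evacuation_minutes(40, 2.0, 2)
# Slide's 4.1 numbers: 8 concurrent per host pair, ~5x faster per migration.
after = evacuation_minutes(40, 2.0 / 5, 8)
print(before, after)  # 40.0 vs 2.0 minutes under these assumptions
```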
  16. Memory Compression
     Description: A new tier in VMware’s memory-overcommit hierarchy (a VMware key differentiator), sitting between guest OS memory and hypervisor swap
     Proof Point: Accessing compressed memory is ~1,000x faster than swap-in
     Benefits: Optimized use of memory; a safeguard for using the memory-overcommit feature with confidence
     Beta Feedback: “Great for memory over-subscription.”
  17. DRS Host Affinity
     Description: Set granular policies that define permitted virtual machine movements (e.g., VMs A run on Servers A only; VMs B run on Servers B only)
     Proof Point: Mandatory compliance enforcement for virtual machines
     Benefits: Tune the environment according to availability, performance, and/or licensing requirements; cloud enablement
     Beta Feedback: “Awesome, we can separate VMs between data centers or blade enclosures with DRS host affinity rules”
  18. HA Enhancements
     Description: Healthcheck status; operational window; optimized interaction with DRS; Application-Aware API
     Proof Point: Events or alarms when configuration rules are broken; no-click status (cluster status available at all times); VMs moved to the best host available; application awareness (with a supported solution)
     Benefits: Adding another “9” to availability
     Beta Feedback: “Major improvements in DRS!”
  19. Fault Tolerance (FT) Enhancements
     FT is fully integrated with DRS: DRS load-balances FT Primary and Secondary VMs within a resource pool (EVC required)
     Versioning control lifts the requirement for ESX build consistency: the Primary VM can run on a host with a different build number than the Secondary VM
     Events for the Primary VM vs. the Secondary VM are differentiated, and logged/stored separately
  20. vStorage APIs for Array Integration (VAAI)
     Storage vMotion
     Provision VMs from template
     Improve thin-provisioning disk performance
     VMFS shared storage pool scalability
  21. Storage vMotion with Array Full Copy Function Benefits Zero-downtime migration Eases array maintenance, tiering, load balancing, upgrades, space mgmt Challenges Performance impact on host, array, network Long migration time (0.5 - 2.5 hrs for 100GB VM) Best practice: use infrequently Improved solution Use array’s native copy/clone functionality
  22. VAAI Speeds Up Storage vMotion – Example
     Without VAAI: start 39:12, end 42:27 (reported as 2 min 21 sec, i.e., 141 seconds)
     With VAAI: start 32:37, end 33:04 (27 seconds)
     141 sec vs. 27 sec
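Turning the slide's reported durations into a speedup factor:

```python
# Durations as reported on the slide, in seconds.
without_vaai = 141  # Storage vMotion without array offload
with_vaai = 27      # same migration with the array's Full Copy primitive

speedup = without_vaai / with_vaai
print(round(speedup, 1))  # 5.2, i.e. roughly a 5x improvement
```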
  23. VM Provisioning from Template with Full Copy Benefits Reduce installation time Standardize to ensure efficient management, protection & control Challenges Requires a full data copy 100 GB template (10 GB to copy): 5-20 minutes FT requires additional zeroing of blocks Improved Solution Use array’s native copy/clone & zeroing functions Up to 10-20x speedup in provisioning time
  24. Copying Data – Optimized Cloning with VAAI VMFS directs storage to move data directly Much less time! Up to 95% reduction Dramatic reduction in load on: Servers Network Storage
  25. Scalable Lock Management
     A number of VMFS operations cause the LUN to temporarily become locked for exclusive write use by one of the ESX nodes, including:
     Moving a VM with vMotion
     Creating a new VM or deploying a VM from a template
     Powering a VM on or off
     Creating a template
     Creating or deleting a file, including snapshots
     A new VAAI feature, Hardware Assisted Locking (atomic test and set), allows the vSphere host to offload management of the required locks to the storage array and avoids locking the entire VMFS file system.
  26. VMFS Scalability with Hardware Assisted Locking Makes VMFS more scalable overall, by offloading block locking mechanism Using Atomic Test and Set (ATS) capability provides an alternate option to use of SCSI reservations to protect the VMFS metadata from being written to by two separate ESX Servers at one time. Normal VMware Locking (No ATS) Enhanced VMware Locking (With ATS)
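The difference between the two locking modes can be sketched as per-record test-and-set versus whole-LUN reservation. This is a toy in-memory model of the idea, not the on-disk protocol; the lock-record names and host IDs are invented for illustration:

```python
import threading

class AtomicTestAndSet:
    """Toy model of per-resource atomic test-and-set locking, the idea
    behind VAAI Hardware Assisted Locking: instead of a SCSI reservation
    on the whole LUN, each host atomically claims only the lock record
    it needs, so unrelated metadata updates proceed in parallel."""

    def __init__(self):
        self._guard = threading.Lock()  # stands in for the array's atomicity
        self._owners = {}               # lock record -> owning host

    def test_and_set(self, record, host):
        with self._guard:
            if record in self._owners:
                return False            # another host already holds this record
            self._owners[record] = host
            return True

    def release(self, record, host):
        with self._guard:
            if self._owners.get(record) == host:
                del self._owners[record]

locks = AtomicTestAndSet()
assert locks.test_and_set("vm1.vmx", "esx-01")      # esx-01 claims vm1's lock
assert not locks.test_and_set("vm1.vmx", "esx-02")  # esx-02 is refused for vm1
assert locks.test_and_set("vm2.vmx", "esx-02")      # but vm2's record is free
```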
  27. For more details on VAAI
     The vSphere 4.1 documentation also describes use of these features in the ESX Configuration Guide, Chapter 9 (pages 124-125), listed in the TOC as “Storage Hardware Acceleration”
     Three settings under advanced settings:
     DataMover.HardwareAcceleratedMove - Full Copy
     DataMover.HardwareAcceleratedInit - Block Zeroing
     VMFS3.HardwareAcceleratedLocking - Hardware Assisted Locking
     Only block-based storage is supported in 4.1
     NetApp Integration with VMware vStorage APIs: http://media.netapp.com/documents/wp-7106.pdf
  28. What’s new with View 4.5?
  29. Deliver desktops as a managed service
     Platform: VMware vSphere for desktops
     Management: VMware View Manager, VMware View Composer, VMware ThinApp
     User Experience: PCoIP, print, multi-monitor display, multimedia, USB redirection
  30. Components of Desktop as a Managed Service
     User Experience (usability, flexibility): View Client, PCoIP protocol, Local Mode
     Management (simplicity, efficiency, security): View Manager, View Composer, ThinApp
     Platform (availability, reliability, scalability): vSphere for Desktops
     Goal: reduce IT costs
  31. PCoIP Improvements and Changes
     Smart card support & Online Certificate Status Protocol (OCSP) certificate revocation
     Location-based printing & awareness
     Custom display topology with zero clients
     FIPS 140-2 compliance
     Port change to 4172
     Improved WAN performance (details on next slide)
  32. PCoIP WAN Improvements in View 4.5
     Four changes in View 4.5 improve WAN performance:
     Improved image quality management
     Improved network bandwidth estimation
     Improved out-of-order packet resilience
     Selective retransmission of imaging packets
     Anticipated impacts on WAN experience:
     Higher image quality without increasing bandwidth consumption
     Improved imaging performance in low-bandwidth situations
     Improved performance when sharing the network with multiple PCoIP sessions or other TCP traffic
     No service degradation when tested with Juniper, F5, OpenVPN, and Cisco SSL VPN solutions
  33. Why PCoIP protocol does well on WAN PCoIP protocol uses host-side rendering to avoid client redirection latency limitations PCoIP protocol uses UDP to transfer real-time audio and image data optimally (no resending of stale packets required) PCoIP protocol dynamically adjusts image quality and frame rate based on available bandwidth PCoIP protocol is able to use lossy compression on images and audio when network is constrained
  34. Planning for PCoIP
     Plan for 200-250 kbps average bandwidth for a typical basic office-productivity desktop
     Plan for 500 kbps - 1 Mbps minimum peak bandwidth to provide headroom for bursts of display changes
     Plan for 1 Mbps per simultaneous user running 480p video
     Plan for less than 70-80% network utilization
     Assumptions are based on 8-10 hours of continuous usage; bandwidth estimate for 5 users: (250 kbps / 0.80) * 5 = ~1.5 Mbps, about one T1
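The slide's sizing arithmetic, reproduced as a small helper. The defaults simply encode the slide's 250 kbps average and 80% utilization ceiling:

```python
def pcoip_wan_sizing(users, avg_kbps=250, max_utilization=0.80):
    """WAN bandwidth (Mbps) for `users` concurrent PCoIP sessions:
    average per-user bandwidth divided by the target utilization
    ceiling, times the number of users (the slide's sizing rule)."""
    return users * (avg_kbps / max_utilization) / 1000  # kbps -> Mbps

print(pcoip_wan_sizing(5))  # 1.5625 Mbps, roughly one T1 (1.544 Mbps)
```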
  35. Win7 OS Support – Guest and Client Supporting Windows 7 32-bit and 64-bit as a client and remote desktop Virtual desktop related improvements Jumplist integration GPO PowerShell 2.0 cmdlets Location-aware printing RDP7 True multi-monitor and Multimedia redirection support MMR is enabled by default and cannot be disabled Aero support for RDP7 client
  36. Mobility & Bring Your Own Computer
     View Client with Local Mode (Type 2 hypervisor)
     The virtual desktop is checked out to the local endpoint and encrypted
     Access desktop, applications and data regardless of network availability
     Changes are checked in to the datacenter when online
     Extends IT security policies to the local desktop, under View Manager control
  37. What’s changed since experimental Offline VDI?
  38. Local Mode Administration All local desktops: Require user authentication to run Are completely encrypted Must periodically “heartbeat” with View Connection Server for management Policies affecting local desktops: Can the desktop be used locally? How long can the local desktop go without server contact? What part of the local desktop should be replicated? (Linked clone desktops only) How often should the desktop be replicated? Is the user allowed to initiate replication? Check in? Rollback? Rollback Discard a local desktop and make server side desktop live Initiate Replication Schedule a one-off replication on next client contact
  39. Smart Card Authentication
     Supported with both PCoIP and RDP
     Revoked certificates may be published through OCSP / CRL
     Cached and encrypted PIN entry for Local Mode smart card logon
     Storage of multiple credentials for public key infrastructure (PKI), one-time password (OTP), and static passwords on a single authentication device
     Support for leading smart card manufacturers, remote access solutions, thin clients, and productivity applications
     Capability to establish specific policies for certificates, PIN management and notification
     Support for smart card standards; direct SSO
  40. Components of Desktop as a Managed Service
     User Experience (usability, flexibility): View Client, PCoIP protocol, Local Mode
     Management (simplicity, efficiency, security): View Manager, View Composer, ThinApp
     Platform (availability, reliability, scalability): vSphere for Desktops
     Goal: reduce IT costs
  41. Scalability Broker Level Broker Pod and Teaming Federated Pool Management Floating & Dedicated Pools Non-Persistent Pool Refresh & Re-Compose View Composer Tiered Storage Local Disk Storage Support Disposable Disks “All Users” directory is no longer copied during customization Thin-Provisioned Disks
  42. Admin Enhancements – You Asked For View Manager Admin UI ported to Adobe Flex Dashboard View Reporting DB Delegated Admin Security Server Setup Improvement Desktop Administrator User location and Filters Individual VM view - Correlation with vCenter
  43. Integrated Dashboard UI
  44. System Auditing and Monitoring
  45. Federated Pool Management
     Feature introduction: View Manager(s) can be architected for up to 10,000 VMs per vCenter
     Number of brokers: 5 + 2 (redundancy); DMZ security servers: 1 + 1 (redundancy)
     External/internal users: 20/80; concurrent online users: 100%
     Maximum sustained logon rate: 5 per second across all brokers
     Concepts behind the feature: management was the big cost when scaling; with Federated Pool Management each VM is managed by one broker only, so adding brokers does not increase management load
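One implication of those sizing figures, worked through: how long the stated sustained logon rate takes to bring a full pod online (a back-of-the-envelope reading of the slide's numbers, ignoring ramp-up effects):

```python
# Pod size and logon rate as stated on the slide.
vms = 10_000
logons_per_second = 5  # sustained, across all brokers

seconds = vms / logons_per_second
print(seconds, round(seconds / 60, 1))
# 2000.0 seconds, i.e. about 33.3 minutes to log on all 10,000 users
```

This is why the sustained logon rate, not just total VM count, matters when planning a morning "boot storm" window.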
  46. View Connection Servers (View Manager)
  47. View Security Server Pairing
     Enable security servers to be automatically paired with their broker at installation time
     Enable configuration changes made on the broker to be propagated to the security server
     [Diagram: the View Client connects through paired Security Server(s) to a teamed View Connection Server with SSO, which fronts centralized virtual desktops via RDP, backed by Microsoft Active Directory and vCenter]
  48. Transfer Server and Transfer Server Repository
     The Transfer Server is a new View server role, required for checking out desktops if you plan to use Local Mode
     Installed in a VM with access to the datastores containing the desktop VMs (Windows 2003 and 2008, 32-bit / 64-bit; requires the LSI parallel disk controller)
     Stateless, without a UI, using JMS; managed by View Manager
     Contains an Apache installation as the client-facing interface for reading and writing desktop data
     Multiple Transfer Servers can be used for scalability
     The Transfer Server Repository is a customer-supplied UNC file share, accessible to one or more Transfer Servers; a local path on the Transfer Server itself gives faster checkout
     View Composer-based pools require a Transfer Server for Local Mode
  49. ThinApp Management
     Feature introduction: Associate ThinApp assignment and delivery at the desktop pool level; once entitled, it supports visibility into ThinApp status on desktops; event auditing
     Benefits: Integrates View and ThinApp functionality, providing ease of management and delivery capabilities, with a dashboard overview of current ThinApps in a client environment
  50. View Composer
  51. View Composer Updated Features Support for SysPrep Refresh, Recompose and Rebalance for Non-Persistent Pools Tiered Storage Support Persistent Disk Management Detach/Reattach/Archive
  52. Sysprep Support
     Feature introduction: Support for Sysprep and Quickprep for linked-clone guest VMs
     Why Sysprep: It is the only customization method supported by Microsoft, and it generates each VM with a unique SID; some software (NAC, AV, etc.) may require a unique SID for licensing control
     Restrictions: Once a pool is configured with either Sysprep or Quickprep, it cannot be changed; Sysprep is only supported if the pool is using vSphere mode (homogeneous clusters of 4.0 or higher ESX servers); Recompose will generate a new SID for a VM created with Sysprep (use with caution)
  53. Sysprep Support
  54. Sysprep Support : Installation and Configuration Installation Install Sysprep on the vCenter server Sysprep functionality is built into the Vista or Win7 OS View Manager Pool must be configured to use vSphere mode Linked clone master image View agent with View Composer option must be installed The master image does not need to be joined to the domain For Win7, Volume Licensing must be configured (Microsoft Key Management System server or Multiple Activation Key) Other The domain controller must be reachable from all deployed clones
  55. View Composer Storage Savings
     View Composer / View achieves storage cost reduction by:
     Allowing storage over-commit
     Using delta disks for OS disks and thin-provisioning user data disks
     Controlling the growth of storage via rebalance
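A rough comparison of full clones against the linked-clone model described above (one shared replica plus a small delta and user-data disk per desktop). All sizes are assumed illustrative figures, not View defaults:

```python
def linked_clone_storage_gb(desktops, base_image_gb, delta_gb, user_disk_used_gb):
    """Compare storage consumed by full clones vs. linked clones.
    Linked clones share one replica of the base image; each desktop
    adds only its OS delta disk and thin-provisioned user-data usage."""
    full_clones = desktops * base_image_gb
    linked_clones = base_image_gb + desktops * (delta_gb + user_disk_used_gb)
    return full_clones, linked_clones

# Assumed: 100 desktops, 20 GB base image, 2 GB delta, 1 GB user data used.
full, linked = linked_clone_storage_gb(100, 20, 2, 1)
print(full, linked)  # 2000 GB vs 320 GB under these assumptions
```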
  56. Persistent Disk Management
     Persistent disks are now first-class objects in View Manager
     Why? When VMs in dedicated pools were deleted, user data could be lost if the persistent disks were deleted; administrators now have options to save user data disks (persistent disks) and manage them
     Restrictions: Recreate desktop can only be done with vSphere pools; a persistent disk can only be attached to VMs in vSphere pools; by default, disks are archived at the root level of the same datastore they are in (this can be changed only if a single disk is archived)
  57. Persistent Disk Management: Screen shots Attached Disks
  58. Components of Desktop as a Managed Service
     User Experience (usability, flexibility): View Client, PCoIP protocol, Offline
     Management (simplicity, efficiency, security): View Manager, View Composer, ThinApp
     Platform (availability, reliability, scalability): vSphere for Desktops
     Goal: reduce IT costs
  59. Optimized Cloud Infrastructure Platform Scalability: Built for the largest desktop environments 1000s of VMs/pod Faster and more efficient vMotion leading to decreased migration time for VMs Shrink and grow desktops based on demand and priority Dynamic Resource Allocation High Performance Optimized for desktop workloads Performance acceleration due to lower memory swapping Best Density Increased desktop VM density – 16-20 VMs/core High Availability and Business Continuity
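The quoted density of 16-20 VMs per core translates directly into host counts for a pod. The 8-core host and 2,000-desktop pod are assumed example figures; only the per-core density range comes from the slide:

```python
import math

def hosts_needed(desktops, vms_per_core, cores_per_host=8):
    """Hosts required for a desktop pod at a given VM-per-core density.
    cores_per_host=8 is an assumed example, not from the slide."""
    vms_per_host = vms_per_core * cores_per_host
    return math.ceil(desktops / vms_per_host)

# A 2,000-desktop pod at the low and high ends of the quoted density.
print(hosts_needed(2000, 16), hosts_needed(2000, 20))  # 16 and 13 hosts
```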
  60. Simplified AV with vShield Endpoint
     Improve performance and effectiveness of existing endpoint security solutions
     Offload AV activity to a Security VM (SVM), eliminating desktop agents and AV storms
     Enable comprehensive desktop VM protection; centrally manage the AV service across VMs with detailed logging of AV activity
     Partner integration through the EPSEC API - Trend Micro
     [Diagram: desktop VMs with no in-guest AV agents alongside a single Security VM running the AV engine, using introspection provided by hardened VMware vSphere]
  61. Tiered Storage What is tiered storage? Place replicas on a single datastore separate from linked clones The replicas can be shared by all linked clones Why SSD for Replicas? Use high-performance solid state disks (SSDs) to create replicas Dynamically improve performance of linked clones Notes vSphere mode only (All ESX servers are 4.0 or higher) Only a single datastore can be selected for replicas The datastore for replicas must be connected to all ESX hosts in the cluster Use with caution – as the replica datastore creates a single point of failure
  62. Tiered Storage
  63. Tiered Storage: Administration
     The datastore used for replicas can be changed; it will only affect newly created, recomposed, or rebalanced VMs
     If a separate datastore for replicas is de-selected or cannot be found, new, recomposed, or rebalanced VMs will use the OS datastores for replicas
  64. View Composer Feature: Disposable Disk
     What is a disposable disk? Windows and Windows applications write temporary/paging files to disk; these updates are usually deleted after use, and the space is reused by the guest OS
     Why? Previously, that space couldn’t be reclaimed by VMFS, and refreshing a linked clone caused persistent data on the C:\ drive to be lost
     Benefits: Provides a zero-impact-to-user, lightweight method to reclaim the disk space used by the OS paging file and temporary files
  65. View Composer: Disposable Disk Disposable Disk Redirect paging and system temp files to a temporary disk removed upon VM powered off Floating View Composer Desktop Dedicated View Composer Desktop
  66. Extensibility with Location Based Printing
     Leverages ThinPrint: the AutoConnect DLL communicates over a virtual port with a ThinPrint .print client
     The .print client queries locally connected printers and network printers
     Certain network printers are filtered out based on the location of the client host
     Integrated via the GPO editor (.ADM templates)
  67. Extensibility with PowerShell
     Provides a series of PowerShell cmdlets to administer View from the command line (PowerCLI)
     Allows management of: VI server entries, View licenses, global config, remote and local desktop sessions, desktops/pools, VMs and physical machines (running the Agent), entitlements
     Why? Allows for automation and scripting, provides extensibility for administration tasks, and offers seamless integration from View to vCenter
     ## Linked Clone operations accept individual machine ids.
     ## The below commands can be used to cover all the VMs in a pool.
     Get-DesktopVM -pool_id <id> | Send-LinkedCloneRebalance -schedule (Get-Date)
     Get-DesktopVM -pool_id <id> | Send-LinkedCloneRefresh -schedule (Get-Date)
     Get-DesktopVM -pool_id <id> | Send-LinkedCloneRecompose -schedule (Get-Date) -parentVMPath <path to new VMfs>
  68. Extensibility in Core Broker View Framework SDK A backend consolidation that extends vdmadmin PowerShell cmdlets defined on the .NET bridge SCOM support
  69. Extensibility with Kiosk Mode
     Locked-down View access: client-device-ID-based provisioning and auto-logon; automatic generation of a ClientID-based user account in AD
     Kiosk-ready View Client: suppression of GUI features, error reporting for script integration, automated USB redirection
     Client info support for in-guest printer mapping; ThinPrint GPO enabled for location-based printing
     Use cases: airport check-in, library, amusement park event kiosk, registration desk, ticketing…
  70. Extensibility with GPO Templates Control View components behaviors domain-wide Configure location-based printing Creating an OU for View desktops vdm_agent.adm (allow protocol access, SSO, run commands etc) vdm_client.adm (pass endpoint client information to agent etc) vdm_server.adm (performance and log configuration etc) vdm_common.adm (common configuration) pcoip.adm (limit peak bandwidth)
  71. Questions?