
VMware Overview – (What’s New) and Virtual Infrastructure Performance, Capacity Planning, and Monitoring





Presentation Transcript


  1. VMware Overview (What’s New) and Virtual Infrastructure Performance, Capacity Planning, and Monitoring. Jonathan McCormick, Feb 12, 2008

  2. Why Virtualization? (Past, Present, Future)
  • Mainframes (1970s): simple, flexible, scalable, available; but expensive, and only for a few critical apps
  • Distributed & tiered computing (1980s–90s): economical; affordable, IT everywhere; but sacrificed simplicity, flexibility, scalability, and availability
  • Utility computing via virtualization (2009): aims to combine the strengths of both as virtualization's IT market penetration grows

  3. Virtualization: Industry-Standard Way of Computing
  • 1st generation (1998–2002), early adoption: the hypervisor; test & development
  • 2nd generation (2003–2005), mainstreaming: virtual infrastructure; server consolidation and high availability
  • 3rd generation (2006–2008), standardization: management & automation on top of the virtual infrastructure; infrastructure management

  4. REVIEW: Key Features of Virtualization
  • Partitioning: run multiple virtual machines simultaneously on a single server
  • Isolation: each virtual machine is isolated from other virtual machines
  • Encapsulation: the entire virtual machine is saved in files and can be moved and copied by moving and copying those files
  • Hardware independence: run a virtual machine on any server without modification

  5. Key Benefits of the ESX Hypervisor (vs. other, hosted solutions)
  • Performance: 20–30% increase over hosted products
  • Scalability: 2x, including memory over-subscription
  • Resource control: direct hardware control

  6. Centralized Management with VirtualCenter • Provision and boot virtual machines • Monitor system availability and performance • Automated notifications and email alerting • Integrate SDK with existing management tools • Secure the environment with robust access control

  7. Physical to Virtual Migration Seamlessly transform physical systemsinto Virtual Machines with VMware Converter

  8. Strong Storage Foundation: VMFS Production proven • Clustered capabilities available since 2003 • Over 20,000 production customers • Included in cost of VI3 Standard & Enterprise Benefits • Transparent storage cluster management • High performance, optimized for VM access Far more than file storage • Provides locking protocols necessary for robust availability features • VMotion, DRS (+ maintenance mode) • HA • VCB

  9. VMotion™ Technology Changes the Game. VMotion lets you move live, running VMs from one ESX host to another while maintaining continuous OS and application service availability. • Optimize utilization • 24/7 HW maintenance • Better availability • Uses shared storage • Needs similar CPUs

  10. Resource Optimization with VMware DRS • What is it? Dynamic and intelligent allocation of hardware resources to ensure optimal alignment between business and IT • Dynamic balancing of computing resources across resource pools • Intelligent resource allocation based on pre-defined rules • Business impact: align IT resources with business priorities; operational simplicity, dramatically increasing system administrator productivity

  11. Ensure High Availability with VMware HA • What is it? Automatic restart of virtual machines in case of server failure • Customer impact: cost-effective high availability for all applications; no need for dedicated stand-by hardware; none of the cost and complexity of clustering

  12. VMware Consolidated Backup • Centralized VM backups • 20–40% better resource utilization • Pre-integrated with 3rd-party backup products

  13. New Enablers for More Effective Management
  1. Guided Consolidation (in VirtualCenter): guided server consolidation
  2. Virtual Desktop Manager: integrated virtual desktop management
  3. DRS with Distributed Power Management: energy-efficient resource management for a green datacenter
  4. ESX Server 3i: next-generation thin hypervisor integrated into server hardware for rapid deployment

  14. ESX Server 3i • Compact, 32MB footprint • Only architecture with no reliance on a general purpose OS • Integration in hardware eliminates installation • Intuitive wizard driven start up experience dramatically reduces deployment time • Standards-based management of the underlying hardware • Server boot to running virtual machines in minutes • Simplified management • Increased security and reliability

  15. From Server Boot to Running VMs in Minutes (ESX Server 3i)
  1. Power on the server and boot into the hypervisor
  2. Configure the admin password
  3. (Optional) Modify the network configuration
  4. Connect the VI Client to the host's IP address, or manage with VirtualCenter

  16. Enabling the ‘Plug-and-Play’ Datacenter • Plug: power on a new server with ESX Server 3i; the new server joins a DRS cluster. • Play: all VMs in the cluster are automatically rebalanced, taking into consideration the newly available resources. • On-demand capacity • Easy scalability

  17. Traditional ESX Server (diagram): a RHEL3-based Service Console (management agents, RPMs, helpers) runs alongside the VMkernel (VMMs; storage, networking, and resource management; HAL and device drivers). Disk footprint: 2 GB, with the Service Console accounting for the large majority of patches.

  18. ESX Server 3i: Thin Virtualization! (diagram): the same VMkernel without the Service Console. Disk footprint: 32 MB, with a correspondingly small share of patches.

  19. Distributed Power Management • Consolidates workloads onto fewer servers when the cluster needs fewer resources • Places unneeded servers in standby mode • Brings servers back online as workload needs increase • Minimizes power consumption while guaranteeing service levels • No disruption or downtime to virtual machines

  20. Distributed Power Management TCO Savings. Savings calculated for a datacenter with 100 physical servers: 16,800 server-hours/week and $80,300/year in energy without DPM, vs. 13,200 server-hours/week and $63,093/year with DPM. Assumptions: 50 out of 100 servers can be powered down for 8 hrs/day on weekdays and 16 hrs/day on weekends. Total power consumption per server (operating power + cooling power) = 1130.625 watts. Cost of energy = $0.0813 per kWh (source: Energy Information Administration)
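The slide's numbers can be reproduced with a short calculation. This is a hedged reconstruction: the arithmetic strongly suggests the hour figures are powered-on server-hours per week and the dollar figures are annual energy cost, but that mapping is inferred, not stated on the slide.

```python
# Reconstruction of the slide's DPM savings math (interpretation: hour
# figures = server-hours/week, dollar figures = annual energy cost).
HOURS_PER_WEEK = 168
SERVERS = 100
IDLE_CAPABLE = 50
IDLE_HRS_PER_WEEK = 8 * 5 + 16 * 2      # 72 h/week per idle-capable server
KW_PER_SERVER = 1.130625                # operating + cooling power, in kW
COST_PER_KWH = 0.0813                   # $/kWh, EIA figure from the slide

def annual_cost(weekly_server_hours):
    """Annual energy cost for a given number of powered-on server-hours/week."""
    return weekly_server_hours * 52 * KW_PER_SERVER * COST_PER_KWH

hrs_without_dpm = SERVERS * HOURS_PER_WEEK                         # 16,800
hrs_with_dpm = hrs_without_dpm - IDLE_CAPABLE * IDLE_HRS_PER_WEEK  # 13,200

print(round(annual_cost(hrs_without_dpm)))  # ~80,301 (slide: $80,300)
print(round(annual_cost(hrs_with_dpm)))     # ~63,094 (slide: $63,093)
```

Both computed costs land within a dollar of the slide's figures, which is what motivates the interpretation above.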

  21. VMware Update Manager • Automates patch management for ESX Server hosts and select Microsoft and RHEL virtual machines • Scans and remediates online as well as offline virtual machines* and online ESX Server hosts • Snapshots virtual machines prior to patching and allows rollback to the snapshot • Eliminates manual tracking of patch levels of ESX Server hosts and virtual machines • Automates enforcement of patch standards • Reduces risk through snapshots and offline virtual machine patching * Note: RHEL guests can only be scanned, not remediated

  22. Non-disruptive ESX Server Patching with Update Manager and DRS • Update Manager patches entire DRS clusters: each host in the cluster enters DRS maintenance mode, one at a time; VMs are migrated off with VMotion; the host is patched and rebooted if required; VMs are migrated back on; then the next host is selected • Automates patching of a large number of hosts with zero downtime to virtual machines

  23. Guided Consolidation
  • DISCOVER: automatically discovers physical servers
  • ANALYZE: analyzes utilization and usage patterns
  • CONVERT: converts physical servers to VMs, placed intelligently based on user response
  • Lowers training requirements for new virtualization users • Steers users through the entire consolidation process

  24. VDI – Virtual Desktop Manager (VDM): centralized virtual desktops • Enterprise-class, scalable connection broker • Central administration and policy enforcement • Automatic desktop provisioning with optional “smart pooling” • Desktop persistence and secure tunneling options • Microsoft AD integration and optional 2-factor authentication via RSA SecurID® • End-to-end enterprise-class desktop control and manageability • Familiar end user experience • Tightly integrated with VMware’s proven virtualization platform (VI3) • Scalability, security and availability suitable for organizations of all sizes

  25. Manage All Types of Downtime

  26. Storage VMotion • Storage independent migration of virtual machine disks • Zero downtime to virtual machines • LUN independent • Supported for Fibre Channel SANs • Minimizes planned downtime due to storage • Complete planned downtime management solution across servers and storage with VMotion and Storage VMotion

  27. Storage VMotion for Storage Array Migration (diagram: LUNs A1/A2 on Array A, off lease, migrate to LUNs B1/B2 on new Array B) Non-disruptively: • Refresh to new arrays • Migrate to a different class of storage • VM granularity, LUN independent

  28. Storage VMotion for Storage I/O Optimization Non-disruptively: • Eliminate virtual machine storage I/O bottlenecks • Move individual virtual machines to the best-performing LUNs

  29. Introducing VMware Site Recovery Manager Site Recovery Manager leverages VMware Infrastructure to transform disaster recovery • Simplifies and automates disaster recovery workflows: • Setup, testing, failover, failback • Provides central management of recovery plans from VirtualCenter • Turns manual recovery processes into automated recovery plans • Simplifies integration with 3rd-party storage replication • Makes disaster recovery rapid, reliable, manageable, affordable

  30. VMware Site Recovery Manager at a Glance (diagram: a protected site and a recovery site, each running Site Recovery Manager and VirtualCenter, with datastore groups replicated between arrays)

  31. Summary of Benefits Site Recovery Manager Leverages VMware Infrastructure to Make Disaster Recovery • Rapid • Automate disaster recovery setup, failover, failback, and testing • Eliminate complexities of traditional recovery • Reliable • Ensure proper execution of recovery plan • Enable easier, more frequent tests • Manageable • Centrally manage recovery plans • Make plans dynamic to match environment • Affordable • Utilize recovery site infrastructure • Reduce management costs These features are representative of feature areas under development.  Feature commitments must not be included in contracts, purchase orders, or sales agreements of any kind.  Technical feasibility and market demand will affect final delivery.

  32. VMware Virtual Infrastructure: Industry-Standard Way of Computing • Most effective way to manage IT infrastructure • Mainframe-class reliability and availability • Platform for any OS, hardware, application • The automated, always-on infrastructure

  33. VMware Performance Considerations

  34. ESX 3.0 Architecture (diagram: management agents and interfaces (hostd) in the Service Console; per-VM VMX user worlds atop a POSIX API; VMMs; and the VMkernel providing the storage stack, network stack, resource management, and device drivers for the hardware, including other peripheral I/O)

  35. Contrast with Other Architectures
  • Competitive hypervisors (Xen/Viridian): a large general-purpose OS in the parent partition (Windows) or Dom0 (Linux) opens security and reliability risks, and all I/O driver traffic going through the parent OS is a bottleneck
  • ESX Server 3i: ultra-small, virtualization-centric kernel; direct driver model optimized for VMs; management via VMs, the Remote CLI, CIM, and the VI API

  36. Virtualization Overhead Sources • Virtualization impacts various system components • CPU: Some instructions require special handling • Memory: Space for virtualization layer and additional page management tasks • Devices: Virtualization layer controls physical devices and shows guest OS standardized view • Resource management: Manages allocation of physical resources to VMs • Virtualization overhead depends on how workloads use these components

  37. CPU Performance • CPU virtualization adds varying amounts of overhead • Little or no overhead for the part of the workload that can run in direct execution (CPU Rings 1+) • Small to significant overhead for virtualizing sensitive privileged instructions (CPU Ring 0) done via Binary Translation or CPU offload (VT) • Performance reduction vs. increase in CPU utilization • CPU-bound applications: any CPU virtualization overhead results in reduced throughput • non-CPU-bound applications: should expect similar throughput at higher CPU utilization
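The throughput-vs-utilization distinction above can be illustrated with a toy model. This is my own sketch, not a VMware formula, and the 10% overhead is an arbitrary example figure.

```python
def virtualized(throughput_native, util_native, overhead=0.10):
    """Toy model: virtualization inflates the CPU cost of the same work.
    Returns (throughput, cpu_utilization) for the virtualized workload."""
    util = util_native * (1 + overhead)
    if util <= 1.0:
        # Non-CPU-bound: spare CPU headroom absorbs the overhead,
        # so throughput is unchanged at a higher utilization.
        return throughput_native, util
    # CPU-bound: no headroom left, so throughput drops instead.
    return throughput_native / util, 1.0

print(virtualized(1000, 0.50))  # (1000, 0.55): same throughput, more CPU
print(virtualized(1000, 1.00))  # (~909, 1.0): throughput reduced
```

The model captures the slide's point: the same overhead shows up as extra utilization in one case and as lost throughput in the other.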

  38. ESX Server CPU Performance • Some multi-threaded apps in an SMP VM may not perform well • Instead, use multiple UP VMs on a multi-CPU physical machine

  39. CPU Performance • ESX 3 supports four virtual processors per VM • Use UP VMs for single-threaded applications • Use UP HAL or UP kernel • For SMP VMs, configure only as many VCPUs as needed • Unused VCPUs in SMP VMs: • Impose unnecessary scheduling constraints on ESX Server • Waste system resources (idle looping, process migrations, etc.)

  40. Memory Performance • Page tables: ESX cannot use guest page tables, so ESX Server maintains shadow page tables (per process, per VCPU) that translate memory addresses from virtual to machine (VA → PA → MA) • The VMM maintains physical (per-VM) to machine maps • No overhead from “ordinary” memory references • Overhead: page table initialization and updates, and guest OS context switching

  41. Memory Performance • ESX memory space overhead • Service Console: 272 MB (ESX3i = no service console) • VMkernel: 100 MB+ (ESX3i = 24MB) • Per-VM memory space overhead increases with: • Number of VCPUs • Size of guest memory • 32 or 64 bit guest OS • ESX memory space reclamation • Page sharing • Ballooning

  42. Memory Performance • Avoid high active host memory over-commitment • Total memory demand = active working sets of all VMs + memory overhead – page sharing • No ESX swapping: total memory demand < physical memory • Right-size guest memory • Define adequate guest memory to avoid guest swapping • Per-VM memory space overhead grows with guest memory
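The sizing rule on this slide can be written as a tiny helper. This is a sketch of the slide's formula only; the megabyte figures in the example are invented for illustration, not VMware defaults.

```python
def esx_memory_demand(active_working_sets_mb, overhead_mb, shared_mb):
    """Slide's rule: total demand = active working sets of all VMs
    + virtualization memory overhead - memory reclaimed by page sharing."""
    return sum(active_working_sets_mb) + overhead_mb - shared_mb

def host_will_swap(demand_mb, physical_mb):
    # No ESX swapping as long as total demand fits in physical memory
    return demand_mb > physical_mb

# Example with made-up numbers: three VMs, 372 MB overhead, 400 MB shared
demand = esx_memory_demand([1024, 2048, 512], overhead_mb=372, shared_mb=400)
print(demand, host_will_swap(demand, physical_mb=8192))  # 3556 False
```

The same helper makes the over-commitment warning concrete: push the working sets past physical memory and `host_will_swap` flips to `True`.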

  43. Networking Performance • Check configuration • Ensure host NICs are running with intended speed and duplex • NIC teaming distributes networking load across multiple NICs • Better throughput and allows passive failover • Use separate NICs to avoid contention • For Console OS (host management traffic), VMKernel (vmotion, iSCSI, NFS traffic), and VM • For VMs running heavy networking workloads • Tune VM-to-VM networking on same host • Use same virtual switch to connect communicating VMs • Avoid buffer overflow: Tune receive/transmit buffers (KB 1428)

  44. Networking Performance • Ensure adequate CPU resources are available • Heavy gigabit networking loads are CPU-intensive, both natively and virtualized • Use the vmxnet virtual device in the guest • The default guest vNIC is vlance, but vmxnet performs better • Install VMware Tools to get the vmxnet driver • e1000 is the default for 64-bit guests

  45. Install VMware Tools • vmxnet – high speed networking driver • Memory balloon driver • Improved graphics – mks, screen resolution • Idler program – deschedule Netware guests when idle • Timer sponge for correct accounting of time • Experimental, manually started • www.vmware.com/pdf/vi3_esx_vmdesched.pdf • Time Sync – syncs time with the host every minute • Manually started (KB 1318)

  46. Storage Performance • Choose Fibre Channel SAN for best performance • Set LUN queue depth appropriately (KB 1267) • Networked storage best practices (NFS, iSCSI) • Ensure sufficient CPU for software-initiated iSCSI and NFS • Avoid link oversubscription • Ensure consistent configuration across the full network path • Use multiple mount points with multiple VMs

  47. Storage Performance (diagram: ESX Server with HBA1–HBA4 connected through an FC switch to storage processors SP1/SP2 on the array) • Hardware configuration affects storage performance • Consult the SAN Configuration Guides • Ensure caching is enabled • Consider tuning the layout of LUNs across RAID sets • Spread I/O requests across available paths

  48. Storage Performance • Creating partitions • Use VirtualCenter • Align partitions in the guest as well • Non-trivial to use command line tools • www.vmware.com/pdf/esx3_partition_align.pdf • RDM vs. VMFS • VMFS has low overhead – reduced complexity • RDM has dedicated I/O queue – increased complexity • VMFS is a distributed file system • Avoid operations that require excessive metadata updates • VM Configuration • Choose placement of data disks and swap files on LUNs appropriately • RAID type, spindles available, concurrent access of LUNs etc. • Increase VM’s max outstanding disk requests if needed (KB 1268)

  49. Dynamically Allocate System Resources • Monitor system resource utilization across hosts • Allocate resources intelligently based on rules defined by user

  50. DRS: Global Scheduler (diagram legend: W = web server, D = database, J = Java app server, I = idle; icon size depicts High, Normal, or Low shares) • Non-DRS cluster: VMs remain where they were placed on Host1 and Host2 • DRS cluster: web server load is balanced across hosts to satisfy share settings
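A share-weighted balancing like the slide's example can be sketched with a greedy placement. This is a hypothetical illustration: the share weights and VM names are made up, and real DRS uses far more sophisticated, dynamic algorithms than this.

```python
# Hypothetical sketch of share-based balancing; share weights are made up.
SHARES = {"high": 4, "normal": 2, "low": 1}

def balance(vms, hosts):
    """Place VMs, heaviest shares first, on the currently least-loaded host."""
    load = {h: 0 for h in hosts}
    placement = {}
    for name, level in sorted(vms.items(), key=lambda kv: -SHARES[kv[1]]):
        target = min(load, key=load.get)   # least-loaded host so far
        placement[name] = target
        load[target] += SHARES[level]
    return placement, load

# Slide 50's VM mix: web servers (high shares), database and Java app
# server (normal), and an idle VM (low), spread across Host1 and Host2.
vms = {"web1": "high", "web2": "high", "db": "normal",
       "java": "normal", "idle": "low"}
placement, load = balance(vms, ["Host1", "Host2"])
print(load)  # near-even share load across the two hosts
```

As on the slide, the two high-share web servers end up on different hosts, and total share load differs by at most one unit between Host1 and Host2.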
