VMware 2007. What’s New? - PowerPoint PPT Presentation

Presentation Transcript

  1. VMware 2007. What’s New? • Vsevolod (Seva) Semouchin, Technical Account Manager • 2005.05.25

  2. Agenda • Vision – Enterprise Software Appliance • Virtualization and Industry Trends • VMware Lab Manager

  3. What Is Virtualization About? • It is about application provisioning

  4. Enterprise Software Vision

  5. State of Enterprise Software – Customer Perspective
  Provisioning • Difficult to plan, install and configure – time to value is slow • Misconfigured applications cause performance issues
  Patching • Responsible for installing OS and application patches – tedious and expensive • Patches may not always work
  Management • Responsible for finding third-party tools for backup, HA, load balancing, change management, etc. • Tools don’t always support apps

  6. State of Enterprise Software – ISV Perspective
  Developing/Testing • Support many operating systems and patch levels • Worry about OS idiosyncrasies – code complexity
  Support • Almost 50% of support calls occur during the install/configure cycle • Customer config very different than expected – performance issues
  Management • Patching of the OS may break the application • Backup, HA and load-balancing tools are untested, unsupported and could break the app

  7. Enterprise Software 2.0 – A New Paradigm • Ready-to-Go Apps + “VAM” + VMware Infrastructure – the same shift as Ready-to-Go Music + iPod

  8. The Operating System – Traditional View • OS jobs: 1. Drive and manage hardware 2. Export a better abstraction • The OS is viewed as an extension of hardware • Privileged position – only one OS

  9. Modern OS Evolution • Goal – support as many applications as possible • Problem – too complex: security, reliability, manageability, performance, innovation

  10. Virtual Appliance • Application + application-specific operating system on top of the virtualization layer • Don’t need complex hardware management • Don’t need broad application support • Application-specific operating system – JeOS • Look at hardware appliance operating systems for examples

  11. Virtual Appliance Marketplace • Launched in November 2006 • ~ 425 Virtual Appliances Available • 2 downloads/min off VAM • Certification program in place • Technical Content for ISVs

  12. VMware Infrastructure • By deploying virtual appliances on a VMware Infrastructure resource pool, customers instantaneously gain: • VMware HA • VMware DRS • VMware Consolidated Backup • VMotion

  13. Benefits – for ISVs and Customers • Speed up time to market • Increase quality • Reduce code complexity • Reduce support costs • Reduce sales cycles • Enhance performance • Reduce time to value • One throat to choke • Reduce patching headaches • Simplify IT management • Provide high-value services for all apps

  14. Getting There… • Virtual Lifecycle Management Tools • Just Enough OS (JeOS) • Licensing Evolution • Standards

  15. Virtual Appliance Lifecycle • Develop • Package & Distribute • Deploy • Manage

  16. Developing a Virtual Appliance • VMware ACE Management Server will be available as a virtual appliance • Appliance consists of: • Just enough OS (JeOS): Debian-based • Apache web server • Mgmt UIs to configure appliance on first boot and thereafter • Patching module to check and download patches • 20 MB zipped • Fully supported by VMware

  17. BEA Liquid VM • BEA JVMs run directly on the virtual infrastructure (x86 servers), with no general-purpose operating system underneath • Native performance • 50% less memory consumption

  18. Deploying ACE Mgmt Server • Download from the VAM and deploy onto VMware Infrastructure running on physical x86 infrastructure • Configure the VA on first boot: networking, patching schedule, database connections, admin users

  19. Patching ACE Mgmt Server • If a security fix is required for JeOS, VMware downloads the fix from the OSV • VMware rolls up OS and app fixes, tests and uploads the patch • The appliance checks www.vmware.com for updates • If a patch is available, the appliance downloads it automatically • The admin installs the patch, or the appliance patches itself

  20. Predictions – 3 years ahead • Significant amount of software will be distributed as virtual appliances • OSVs will provide JeOS versions • Data centers will have a wide variety of JeOSes deployed • Ecosystem around virtual appliances will evolve rapidly • License definition and enforcement will evolve significantly to support virtualization • Standards in virtualization will be implemented

  21. Brave New World • A sysadmin in 2010: • Gets the application deployment plan from the business unit • Downloads the necessary virtual appliances • Uses VirtualCenter to deploy the necessary number of appliance instances • Uses VirtualCenter to connect the virtual appliances to virtual switches

  22. Virtualization and Industry Trends

  23. Background context: the full virtualization software stack • Management and distributed virtualization services – VirtualCenter and third-party solutions: VMotion, provisioning, backup • Enterprise-class virtualization functionality – DRS, distributed virtual machine file system, virtual NIC and switch, resource management (CPU scheduling, memory scheduling, storage bandwidth, network bandwidth) • ESX Server VMkernel – one virtual machine monitor (VMM) per VM, I/O stack, storage stack, network stack, device drivers, hardware interface • Service console, per-VM VMX processes, SDK / VirtualCenter agent, third-party agents

  24. Virtualization Software Technology • Virtual Machine Monitor (VMM): the software component that implements the virtual machine hardware abstraction; responsible for running the guest OS • Hypervisor: the software responsible for hosting and managing virtual machines; runs directly on the hardware; functionality varies greatly with architecture and implementation (base functionality such as scheduling, plus enhanced functionality)

  25. Background: virtualizing the whole system • Three components to classical virtualization techniques • Many virtualization technologies focus on handling privileged instructions

  26. CPU Virtualization • Three components to classical virtualization techniques • Many virtualization technologies focus on handling privileged instructions

  27. What are privileged instructions and how are they traditionally handled? • In traditional OSes (e.g. Windows): the OS runs in privileged mode (ring 0); it exclusively “owns” the CPU hardware and can use privileged instructions to access it; application code (ring 3) has less privilege • In mainframes/traditional VMMs: the VMM needs the highest privilege level for isolation and performance; it uses a “ring compression” or “de-privileging” technique – privileged guest OS code runs at user level, and privileged instructions trap and are emulated by the VMM • This way, the guest OS does NOT directly access the underlying hardware
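The trap-and-emulate technique described above can be sketched as a small simulation (illustrative Python, not VMware code; the class, method and instruction names are hypothetical):

```python
# Hypothetical simulation of classic "trap and emulate": de-privileged
# guest code runs directly until it hits a privileged instruction, which
# traps; the VMM then emulates it against virtual (shadowed) CPU state.

class PrivilegedInstructionTrap(Exception):
    """Raised when de-privileged guest code runs a privileged instruction."""

class VMM:
    def __init__(self):
        self.vcpu = {"IF": 0}  # virtual interrupt-enable flag, kept by the VMM

    def run_guest(self, instructions):
        for insn, arg in instructions:
            try:
                self.direct_execute(insn)
            except PrivilegedInstructionTrap:
                self.emulate(insn, arg)  # handled against vcpu state, not real HW

    def direct_execute(self, insn):
        # Privileged instructions trap when executed at user level.
        if insn in ("cli", "sti", "popf_if"):
            raise PrivilegedInstructionTrap(insn)
        # Non-privileged instructions would run directly on the hardware.

    def emulate(self, insn, arg):
        if insn == "sti":
            self.vcpu["IF"] = 1
        elif insn == "cli":
            self.vcpu["IF"] = 0
        elif insn == "popf_if":
            self.vcpu["IF"] = arg

vmm = VMM()
vmm.run_guest([("sti", None), ("cli", None), ("popf_if", 1)])
print(vmm.vcpu["IF"])  # 1 — the guest's IF lives in the VMM, not the real CPU
```

The key property is that the guest never touches real privileged state: every privileged operation detours through the VMM.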

  28. Handling Privileged Instructions for x86 • De-privileging not possible with x86! • Some privileged instructions have different semantics at user-level: “non-virtualizable instructions” • VMware uses direct execution and binary translation (BT) • BT for handling privileged code • Direct execution of user-level code for performance • Any unmodified x86 OS can run in virtual machine • Virtual machine monitor lives in the guest address space

  29. x86 Virtualizability • Some x86 instructions have different semantics at different privilege levels! • The EFLAGS register contains a mixture of condition codes and privilege state (the IF flag) • POPF instruction: user-level accesses silently fail to write IF, while supervisor accesses succeed in changing IF • This eliminates the “trap and emulate” approach to virtualization
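A toy model of the POPF behavior described above (IF really is bit 9 of EFLAGS, but the function itself is a hypothetical simulation, not CPU microcode):

```python
# Hypothetical model of why x86 POPF defeats trap-and-emulate: at user
# level it neither traps nor updates IF — the write is silently dropped,
# so a VMM running the guest kernel de-privileged never sees it.

IF = 1 << 9  # interrupt-enable flag, bit 9 of EFLAGS

def popf(eflags, value, cpl):
    """Return the new EFLAGS after POPF. cpl 0 = supervisor, 3 = user."""
    if cpl != 0:
        # User-level POPF: the IF bit of the popped value is silently
        # ignored and the old IF is preserved — and no trap occurs.
        return (value & ~IF) | (eflags & IF)
    return value  # supervisor POPF updates IF as expected

# Supervisor access succeeds in setting IF:
assert popf(0, IF, cpl=0) & IF
# De-privileged guest kernel code: the write to IF silently fails, and
# crucially there is no trap, so the VMM gets no chance to emulate it.
assert not popf(0, IF, cpl=3) & IF
```

Because nothing faults, binary translation (below) rewrites such instructions instead of waiting for a trap.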

  30. Monitor Execution Engine (state diagram) • Direct Exec: the VM enters unprivileged mode; faults and system calls transfer control to C code • C code: resumes execution of privileged guest code via BT • BT: faults and callouts return control to C code

  31. Monitor Execution Engine • Direct Execution: allow guest code to run directly on hardware; requires shadowed state to be loaded; possible for most user-level code • C code: code running in the monitor context; instruction and fault emulation • Binary Translation: overcome the “popf problem” by modifying the guest instruction stream; requires slow translation but benefits from caching

  32. Binary Translator • Each invocation of the translator consumes one translation unit and produces one compiled code fragment (CCF) • Translate on demand, interleaved with execution • Store CCFs in a translation cache (TC)
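The translate-on-demand scheme with a translation cache can be sketched as follows (hypothetical Python simulation; `translate` and `execute_from` stand in for the real x86-to-x86 compiler and dispatch loop):

```python
# Sketch of translate-on-demand with a translation cache (TC): each
# guest translation unit is compiled once into a compiled code fragment
# (CCF) and reused on every later visit to the same guest EIP.

translation_cache = {}  # guest EIP -> compiled code fragment (CCF)
translations_done = 0   # counts how often the (slow) translator runs

def translate(guest_eip):
    """Compile one translation unit into one compiled code fragment."""
    global translations_done
    translations_done += 1
    return f"CCF@{guest_eip:#x}"  # stand-in for emitted machine code

def execute_from(guest_eip):
    ccf = translation_cache.get(guest_eip)
    if ccf is None:  # TC miss: translate on demand, then cache
        ccf = translation_cache[guest_eip] = translate(guest_eip)
    return ccf       # TC hit: reuse the cached fragment

execute_from(0x8048000)
execute_from(0x8048000)   # second visit hits the cache — no retranslation
print(translations_done)  # 1
```

Caching is what makes BT pay off: translation is slow, but each fragment is translated once and executed many times.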

  33. BT: Decoding • Read bytes at the guest EIP • Guest memory: 55 ff 33 c7 03 ff ff ff ff 8b d4 … • The decoder converts this byte stream into “DecodeInfos”:
  struct DecodeInfo {
      int length;
      Opcodes opcode;
      Operands operand;
      int dispSize, immSize;
      …
  };

  34. Protecting the VMM (since it lives in the guest’s address space for BT performance) • Need to protect the VMM and ensure isolation: protect virtual machines from each other, and protect the VMM from virtual machines • VMware traditionally relies on memory segmentation hardware to protect the VMM • The VMM lives at the top of the guest address space (near 4 GB) • Segment limit checks catch writes to the VMM area • Summary: since the VMM is in the same address space as the guest (for BT performance benefits), segment limit checks protect the VMM

  35. CPU assists: Intel VT-x / AMD-V • CPU vendors are embracing virtualization: Intel Virtualization Technology (VT-x) and AMD-V • The key feature is a new CPU execution mode: the VMM executes in root mode, while the guest OS (ring 0) and its apps (ring 3) run in non-root mode, with VM exit / VM enter transitions between them • Allows x86 virtualization without binary translation or paravirtualization

  36. 1st Generation CPU Assist • Initial VT-x/AMD-V hardware targets privileged instructions • HW is an enabling technology that makes it easier to write a functional VMM • Alternative to using binary translation • Initial hardware does not guarantee highest performance virtualization • VMware binary translation outperforms VT-x/AMD-V

  37. Challenges of Virtualizing x86-64 • Older AMD64 and Intel EM64T architectures did not include segmentation in 64-bit mode • How do we protect the VMM? • 64-bit guest support requires additional hardware assistance • Segment limit checks available in 64-bit mode on newer AMD processors • VT-x can be used to protect the VMM on EM64T • Requires trap-and-emulate approach instead of BT

  38. Memory Virtualization • One of the most challenging technical problems in virtualizing the x86 architecture

  39. Review of the “Virtual Memory” Concept • Modern operating systems provide virtual memory support • Applications see a contiguous address space that is not necessarily tied to underlying physical memory in the system • The OS keeps mappings of virtual page numbers to physical page numbers • Mappings are stored in page tables • The CPU includes a memory management unit (MMU) and translation lookaside buffer (TLB) for virtual memory support

  40. Virtualizing Virtual Memory • To run multiple virtual machines on a single system, another level of memory virtualization must be added • The guest OS still controls the mapping of virtual address to physical address: VA -> PA • In the virtualized world, the guest OS cannot have direct access to machine memory • Each guest’s “physical” memory is no longer the actual machine memory in the system • The VMM maps guest physical memory to the actual machine memory: PA -> MA
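The two levels of translation can be sketched as a pair of table lookups (illustrative Python; the page numbers are invented):

```python
# Sketch of the two levels of address translation in a virtualized
# system: the guest OS maps VA -> PA, and the VMM maps PA -> MA.
# Single-page tables with made-up page numbers, for illustration only.

guest_page_table = {0x10: 0x20}  # guest OS: virtual page -> guest-physical page
vmm_phys_map     = {0x20: 0x57}  # VMM: guest-physical page -> machine page

def translate(va_page):
    pa_page = guest_page_table[va_page]  # first level: guest's own mapping
    ma_page = vmm_phys_map[pa_page]      # second level: VMM's mapping
    return ma_page

print(hex(translate(0x10)))  # 0x57
```

Doing both lookups on every memory access would be far too slow, which motivates the shadow page tables on the next slide.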

  41. Virtualizing Virtual Memory: Shadow Page Tables • The VMM uses “shadow page tables” to accelerate the mappings • Directly map VA -> MA • Avoids the two levels of translation on every access • Leverages TLB hardware for the VA -> MA mapping • When the guest OS changes VA -> PA, the VMM updates the shadow page tables
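A minimal sketch of the shadow-page-table idea, at single-page granularity (illustrative Python; `refill_shadow` is a hypothetical helper, not a VMware function):

```python
# Sketch of a shadow page table: the VMM composes VA -> PA and PA -> MA
# into a direct VA -> MA table (the one the hardware TLB actually uses),
# and must rebuild entries whenever the guest edits its own page table.

guest_pt  = {0x10: 0x20}             # guest's VA -> PA mapping
phys_map  = {0x20: 0x57, 0x21: 0x99} # VMM's PA -> MA mapping
shadow_pt = {}                       # shadow table: direct VA -> MA

def refill_shadow(va):
    # Compose the two mappings into one entry the hardware can use.
    shadow_pt[va] = phys_map[guest_pt[va]]

refill_shadow(0x10)
assert shadow_pt[0x10] == 0x57  # one-step VA -> MA lookup, no double walk

guest_pt[0x10] = 0x21           # guest OS remaps the page...
refill_shadow(0x10)             # ...so the VMM must update the shadow entry
print(hex(shadow_pt[0x10]))     # 0x99
```

Keeping the shadow table coherent with guest page-table writes is exactly the bookkeeping overhead that NPT/EPT (next slides) move into hardware.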

  42. Future Hardware Assist at the Memory level • Both AMD and Intel have announced roadmap of additional hardware support • Memory virtualization (Nested paging, Extended Page Tables) • Device and I/O virtualization (VT-d, IOMMU)

  43. Nested Paging / Extended Page Tables • Hardware support for memory virtualization is on the way • AMD: Nested Paging / Nested Page Tables (NPT) • Intel: Extended Page Tables (EPT) • Conceptually, NPT and EPT are identical • Two sets of page tables exist: VA -> PA and PA -> MA • The processor hardware does the page walk for both VA -> PA and PA -> MA

  44. Benefits of NPT/EPT • Performance • Compute-intensive workloads already run well with binary translation/direct execution • NPT/EPT will provide noticeable performance improvement for workloads with MMU overheads • Hardware addresses the performance overheads due to virtualizing the page tables • With NPT/EPT, even more workloads become candidates for virtualization • Reducing memory consumption • Shadow page tables consume additional system memory • Use of NPT/EPT will reduce “overhead memory” • Today, VMware uses HW assist in very limited cases • NPT/EPT provide motivation to use HW assist much more broadly • NPT/EPT require usage of AMD-V/VT-x

  45. Flexible VMM Architecture • Flexible “multi-mode” VMM architecture, with a separate VMM per virtual machine • Select the mode that achieves the best workload-specific performance based on CPU support • Today – 32-bit: BT; 64-bit: BT or VT-x • Tomorrow – 32-bit: BT or AMD-V/NPT or VT-x/EPT; 64-bit: BT or VT-x or AMD-V/NPT or VT-x/EPT • Same VMM architecture for ESX Server, Player, Server, Workstation and ACE

  46. VMware Lab Manager

  47. VMware Solutions Virtual Desktop Infrastructure Application Lifecycle Management Next Generation Datacenter Operations Application Development & Testing Server Consolidation Resource Optimization Mobile Workforce High Availability Load Balancing Thin Client Application Support Business Continuity Rapid Provisioning Desktop Security Workstation Virtual Infrastructure 3 ACE

  48. VMware Solutions Virtual Desktop Infrastructure Application Lifecycle Management Next Generation Datacenter Operations Application Development & Testing Server Consolidation Resource Optimization Mobile Workforce High Availability Load Balancing Thin Client Application Support Business Continuity Rapid Provisioning Desktop Security VDM Lab Manager Workstation Virtual Infrastructure 3 ACE 2

  49. Lab Manager - Challenges • Server Sprawl in Development and Test Labs • 2-3 machines in application development and test for every server in production (+ storage, networking, heating …) • Little lab asset sharing between groups – static and captive equipment, even with development cycle peaks and valleys • Server-to-staff ratios exceed 7:1 in some cases • System Setup and Provisioning Overhead • Repetitive system setup tasks overwhelm IT and slow software development cycles • Accounts for more than 50% of time expended in the development and test cycle • Reproducing and Troubleshooting Defects • Difficult to resolve software defects when a specific environment or complex system state is required to reproduce them

  50. What if you could… • Consolidate and efficiently share lab resources across development and test teams • Capitalize on development cycle ebb and flow • Eliminate static allocation of infrequently used resources to teams • Provide lab access and hosted desktops to your outsourcing partners • Provide secure self-service “check out” and provisioning of resources to AD teams while IT maintains control of the lab • Accelerate your software development cycles • Use IT resources more strategically – not for repetitive provisioning • Enable developers to reliably reproduce, troubleshoot and resolve software defects before putting applications into production • Slash time spent trying to recreate defect-exposing configurations • Reproduce problems discovered by remote QA resources • Eliminate “works fine on my machine” from the AD lexicon