
Introduction to the Palacios VMM and the V3Vee Project


Presentation Transcript


  1. Introduction to the Palacios VMM and the V3Vee Project John R. Lange Department of Computer Science University of Pittsburgh September 28th, 2010

  2. Outline • V3Vee Project • Multi-institutional collaboration • Palacios • Developed first VMM for scalable High Performance Computing (HPC) • Largest scale study of virtualization • Proved HPC virtualization is effective at scale • Current Research • Symbiotic Virtualization • Cloud <-> HPC integration • New system architectures

  3. Virtuoso Project (2002-2007) virtuoso.cs.northwestern.edu • “Infrastructure as a Service” distributed/grid/cloud virtual computing system • Particularly for HPC and multi-VM scalable apps • First adaptive virtual computing system • Drives virtualization mechanisms to increase the performance of existing, unmodified apps running in collections of VMs • Focus on wide-area computation R. Figueiredo, P. Dinda, J. Fortes, A Case for Grid Computing on Virtual Machines, Proceedings of the 23rd International Conference on Distributed Computing Systems (ICDCS 2003), May, 2003.

  4. Virtuoso: Adaptive Virtual Computing • Providers sell computational and communication bandwidth • Users run collections of virtual machines (VMs) that are interconnected by overlay networks • Replacement for buying machines • That continuously adapts…to increase the performance of your existing, unmodified applications and operating systems See virtuoso.cs.northwestern.edu for many papers, talks, and movies

  5. V3VEE Overview • Goal: Create an original open source virtual machine monitor (VMM) framework for x86/x64 that permits per-use composition of VMMs optimized for… • High Performance Computing • Computer architecture research • Experimental computer systems research • Teaching • Community resource development project (NSF CRI) • X-Stack exascale systems software research (DOE)

  6. Collaborators

  7. People • University of Pittsburgh • Jack Lange, and more… • Northwestern University • Jack Lange, Peter Dinda, Russ Joseph, Lei Xia, Chang Bae, Giang Hoang, Fabian Bustamante, Steven Jaconette, Andy Gocke, Mat Wojick, Peter Kamm, Robert Deloatch, Yuan Tang, Steve Chen, Brad Weinberger, Mahdav Suresh, and more… • University of New Mexico • Patrick Bridges, Zheng Cui, Philip Soltero, Nathan Graham, Patrick Widener, and more… • Sandia National Labs • Kevin Pedretti, Trammell Hudson, Ron Brightwell, and more… • Oak Ridge National Lab • Stephen Scott, Geoffroy Vallee, and more…

  8. Why a New VMM? [Diagram: a virtual machine monitor at the center of several communities: research in systems, cloud, and datacenters; the commercial space; cloud/datacenter use; research and use in High Performance Computing; teaching; research in computer architecture; and the open source community with its broad user and developer base. This slide marks the cloud/datacenter side as well-served by existing VMMs.]

  9. Why a New VMM? [Same diagram: research and use in High Performance Computing, teaching, and computer architecture research are marked as ill-served by existing VMMs.]

  10. Why a New VMM? • Code Scale • Compact codebase by leveraging hardware virtualization assistance • Code decoupling • Make virtualization support self-contained • Linux/Windows kernel experience unneeded • Make it possible for researchers and students to come up to speed quickly • Freedom: BSD license • Composition • Enable raw performance at large scales

  11. Palacios VMM • OS-independent embeddable virtual machine monitor • Developed at Northwestern and University of New Mexico • And now University of Pittsburgh • Open source and freely available • Users: • Kitten: Lightweight supercomputing OS from Sandia National Labs • MINIX 3 • Modified Linux • Successfully used on supercomputers, clusters (Infiniband and Ethernet), and servers • http://www.v3vee.org/palacios

  12. Embeddability • Palacios compiles to a static library • Compact host OS interface allows Palacios to be embedded in different Host OSes • Palacios adds virtualization support to an OS • Palacios + lightweight kernel = traditional “type-I” “hypervisor” VMM • Current embeddings • Kitten, GeekOS, Minix 3, Linux
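
To make the embedding concrete, the sketch below shows what a host OS interface of this kind might look like: a structure of function pointers that the host kernel fills in and hands to the statically linked VMM library at initialization. The structure and function names are assumptions for this sketch, not the exact Palacios host API.

```c
/* Illustrative host interface for embedding an OS-independent VMM library.
 * All names here are hypothetical. */

struct vmm_os_hooks {
    void  (*print)(const char *fmt, ...);          /* host console output   */
    void *(*allocate_pages)(int num_pages);        /* physically contiguous */
    void  (*free_pages)(void *addr, int num_pages);
    void *(*malloc)(unsigned int size);            /* host kernel heap      */
    void  (*free)(void *ptr);
    void  (*yield_cpu)(void);                      /* let the host schedule */
};

/* The host kernel fills in the hooks once at boot and passes them to the
 * VMM library; everything the VMM needs from the host goes through this
 * table, which is what keeps the VMM itself OS-independent. */
void vmm_init(struct vmm_os_hooks *hooks);
```

Because the host only has to implement this small table, porting the VMM to a new host kernel is a matter of filling in the hooks rather than rewriting virtualization code.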

  13. Kitten: An Open Source LWK http://code.google.com/p/kitten/ • Better match for user expectations • Provides mostly Linux-compatible user environment • Including threading • Supports unmodified compiler toolchains and ELF executables • Better match for vendor expectations • Modern code-base with familiar Linux-like organization • Drop-in compatible with Linux • Infiniband support • End goal is deployment on a future capability system

  14. Palacios as an HPC VMM • Minimalist interface • Suitable for an LWK • Compile and runtime configurability • Create a VMM tailored to specific environments • Low noise • Contiguous memory pre-allocation • Passthrough resources and resource partitioning

  15. A Compact Type-I VMM • KVM: 50k-60k lines + kernel dependencies (??) + user-level devices (180k) • Xen: 580k lines (50k-80k core)

  16. HPC Performance Evaluation • Virtualization is very useful for HPC, but only if it doesn’t hurt performance • Virtualized Red Storm with Palacios • Evaluated with Sandia’s system evaluation benchmarks • Red Storm: 17th-fastest supercomputer, a Cray XT3 with 38,208 cores, ~3,500 sq ft, 2.5 MegaWatts, $90 million

  17. Scalability at Small Scales (Catamount) [Graph: HPCCG, a conjugate gradient solver; virtualized performance is within 5% of native and scalable.]

  18. Large Scale Study • Evaluation on full Red Storm system • 12 hours of dedicated system time on full machine • Largest virtualization performance scaling study to date • Measured performance at exponentially increasing scales • Up to 4096 nodes • Publicity • New York Times • Slashdot • HPCWire • Communications of the ACM • PC World

  19. Scalability at Large Scale (Catamount) [Graph: CTH, a multi-material, large deformation, strong shockwave simulation; virtualized performance is within 3% of native and scalable.]

  20. Infiniband on Commodity Linux (Linux guest on IB cluster) [Graph: 2-node Infiniband ping-pong bandwidth measurement.]

  21. Symbiotic Virtualization • Virtualization can scale • Near native performance for optimized VMM/guest (within 5%) • VMM needs to know about guest internals • Should modify behavior for each guest environment • Symbiotic Virtualization • Design both guest OS and VMM to minimize semantic gap • Bidirectional synchronous and asynchronous communication channels • Interfaces are optional and can be dynamically added by VMM

  22. Symbiotic Interfaces • SymSpy Passive Interface • Internal state already exists but it is hidden • Asynchronous bi-directional communication • via shared memory • Structured state information that is easily parsed • Semantically rich • SymCall Functional Interface • Synchronous upcalls into guest during exit handling • API • Function call in VMM • System call in Guest • Brand new interface construct • SymMod Code Injection • Guest OS might lack functionality • VMM loads code directly into guest • Extend guest OS with arbitrary interfaces/functionality

  23. VNET Overlay Networking for HPC • Virtuoso project built “VNET/U” • User-level implementation • Overlay appears to host to be simple L2 network (Ethernet) • Ethernet in UDP encapsulation (among others) • Global control over topology and routing • Works with any VMM • Plenty fast for WAN (200 Mbps), but not HPC • V3VEE project is building “VNET/P” • VMM-level implementation in Palacios • Goal: Near-native performance on Clusters and Supers for at least 10 Gbps • Also: easily migrate existing apps/VMs to specialized interconnect networks
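
As a rough illustration of the encapsulation such an overlay performs, the sketch below wraps a raw guest Ethernet frame in a small header that is then carried as the payload of an ordinary UDP datagram between overlay endpoints. The header fields and helper function are hypothetical, not the actual VNET packet format.

```c
/* Sketch of Ethernet-in-UDP encapsulation for a VNET-style overlay:
 * the guest's L2 frame rides inside a UDP datagram between endpoints.
 * Field names and layout are illustrative. */
#include <stdint.h>
#include <string.h>

#define MAX_ETH_FRAME 1514

struct vnet_pkt {
    uint32_t dst_overlay_id;        /* which overlay/VM the frame is for */
    uint16_t frame_len;             /* length of the encapsulated frame  */
    uint8_t  frame[MAX_ETH_FRAME];  /* raw guest Ethernet frame (L2)     */
};

/* Encapsulate a guest frame; the caller then sends the returned number
 * of bytes over a normal UDP socket to the peer chosen by the overlay
 * routing tables. */
static inline size_t vnet_encap(struct vnet_pkt *pkt, uint32_t dst,
                                const uint8_t *frame, uint16_t len) {
    pkt->dst_overlay_id = dst;
    pkt->frame_len      = len;
    memcpy(pkt->frame, frame, len);
    return sizeof(*pkt) - MAX_ETH_FRAME + len;
}
```

Moving this encapsulation from a user-level daemon (VNET/U) into the VMM itself (VNET/P) is what removes the context switches and copies that limited overlay bandwidth.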

  24. Current VNET/P Node Architecture [Diagram: node architecture with an Application VM and a Service VM.]

  25. Conclusion • Palacios: original open-source VMM for modern architectures • Small and compact • Easy to understand and extend/modify • Framework for exploring alternative VMM architectures • V3VEE Project: http://v3vee.org • Collaborators welcome!

  26. Backup Slides

  27. Current Research • Virtualization can scale • Near native performance for optimized VMM/guest (within 5%) • VMM needs to know about guest internals • Should modify behavior for each guest environment • Example: Paging method to use depends on guest • Black Box inference is not desirable in an HPC environment • Unacceptable performance overhead • Convergence time • Mistakes have large consequences • Need guest cooperation • Guest and VMM relationship should be symbiotic (Thesis)

  28. Semantic Gap • VMM architectures are designed as black boxes • Explicit low-level OS interface (hardware or paravirtual) • Internal OS state is not exposed to the VMM • Many uses for internal state • Performance, security, etc... • VMM must recreate that state • “Bridging the Semantic Gap” • [Chen: HotOS 2001] • Two existing approaches: Black Box and Gray Box • Black Box: Monitor external guest interactions • Gray Box: Reverse engineer internal guest state • Examples • Virtuoso Project (Early graduate work) • Lycosid, Antfarm, Geiger, IBMon, many others

  29. Example: Swapping • Disk storage for expanding physical memory [Diagram: an application's working set spans guest physical memory and memory swapped out to the swap disk; the VMM has only basic knowledge without the guest's internal state.]

  30. SymSpy • Shared memory page between OS and VMM • Global and per-core interfaces • Standardized data structures • Shared state information • Read and write without exits
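
A minimal sketch of what such a shared page might contain is shown below, assuming one global page plus one per-core page; the field names are hypothetical rather than the actual SymSpy definitions.

```c
/* Hypothetical layout of SymSpy shared pages. One page is global to the
 * VM, one exists per virtual core. Both the guest OS and the VMM map the
 * same physical page, so either side can read or update the fields
 * directly without causing a VM exit. */
#include <stdint.h>

struct symspy_global_page {
    uint64_t magic;             /* lets each side validate the interface  */
    uint64_t version;           /* interface revision negotiated at boot  */
    uint64_t guest_features;    /* symbiotic features the guest exports   */
    uint64_t vmm_features;      /* features the VMM has enabled           */
};

struct symspy_percore_page {
    uint64_t current_pid;       /* process running on this virtual core   */
    uint64_t kernel_cr3;        /* guest page-table root, useful to VMM   */
    uint64_t pending_requests;  /* bitmap of asynchronous VMM requests    */
};
```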

  31. SymCall (Symbiotic Upcalls) • Conceptually similar to System Calls • System Calls: Application requests OS services • Symbiotic Upcalls: VMM requests OS services • Designed to be architecturally similar • Virtual hardware interface • Superset of System Call MSRs • Internal OS implementation • Share same system call data structures and basic operations • Guest OS configures a special execution context • VMM instantiates that context to execute synchronous upcall • Symcalls exit via a dedicated hypercall
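
The sketch below illustrates the VMM side of this flow under those assumptions: save the context of the interrupted exit, redirect the virtual core to the guest's advertised SymCall entry point, re-enter the guest until it issues the return hypercall, then restore the original context. All structure and function names are illustrative, not the real Palacios code.

```c
#include <stdint.h>

/* Illustrative per-core guest context; the real structures hold far more
 * state (segments, control registers, MSRs). */
struct guest_ctx {
    uint64_t rip;
    uint64_t rsp;
    uint64_t rflags;
    uint64_t gprs[16];
};

/* Hypothetical helpers standing in for the VMM's context-switch code. */
extern void save_guest_ctx(struct guest_ctx *dst);
extern void load_symcall_ctx(uint64_t symcall_nr, uint64_t arg0);
extern int  run_guest_until_symcall_ret(uint64_t *ret_val);
extern void restore_guest_ctx(const struct guest_ctx *src);

/* Synchronous upcall issued by the VMM while it is already handling an
 * exit: redirect the core to the guest's advertised SymCall entry point,
 * run it until the dedicated return hypercall, then resume the exit. */
static int symcall(uint64_t symcall_nr, uint64_t arg0, uint64_t *ret) {
    struct guest_ctx saved;

    save_guest_ctx(&saved);                      /* state of interrupted exit   */
    load_symcall_ctx(symcall_nr, arg0);          /* set RIP/args for the upcall */
    int err = run_guest_until_symcall_ret(ret);  /* nested VM entry/exit        */
    restore_guest_ctx(&saved);                   /* back to the original exit   */
    return err;
}
```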

  32. SymCall Control Flow [Diagram: running in guest; exit and return to VMM; handle exit; nested exits during the upcall.]

  33. Implementation • Symbiotic Linux guest OS • Exports SymSpy and SymCall interfaces • Palacios • Fairly significant modifications to enable nested VM entries • Re-entrant exit handlers • Serialize subset of guest state out of global hardware structures

  34. SwapBypass • Purpose: improve performance when swapping • Temporarily expand guest memory • Completely bypass the Linux swap subsystem • Enabled by SymCall • Not feasible without symbiotic interfaces • VMM detects guest thrashing • Shadow page tables used to prevent it
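
A sketch of how this might look in the shadow page fault path is given below: on a fault against a page the guest believes is swapped out, the VMM consults the guest via a SymCall and, if the page is held in the VMM's swap cache, maps the cached copy directly into the shadow page tables. The helper names are hypothetical, not the real SwapBypass implementation.

```c
#include <stdint.h>

struct vm_core;   /* opaque per-core guest state */

/* Hypothetical helpers; names do not correspond to the real Palacios code. */
extern int   symcall_query_vaddr(struct vm_core *c, uint64_t va, uint64_t *idx);
extern int   swap_cache_holds(uint64_t idx);
extern void *swap_cache_lookup(uint64_t idx);
extern int   shadow_map(struct vm_core *c, uint64_t va, void *host_page);
extern int   handle_shadow_fault_normally(struct vm_core *c, uint64_t va);

/* On a shadow page fault against a page the guest thinks is swapped out,
 * ask the guest (via SymCall) which swap entry backs the address; if the
 * VMM's global swap cache holds it, map the cached copy directly into
 * the shadow page tables and skip the guest swap-in entirely. */
int shadow_page_fault(struct vm_core *core, uint64_t fault_va) {
    uint64_t swap_idx;

    if (symcall_query_vaddr(core, fault_va, &swap_idx) == 0 &&
        swap_cache_holds(swap_idx)) {
        void *host_page = swap_cache_lookup(swap_idx);
        return shadow_map(core, fault_va, host_page);
    }

    return handle_shadow_fault_normally(core, fault_va);
}
```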

  35. Symbiotic Shadow Page Tables [Diagram: guest page tables (page directory and page table) alongside shadow page tables; entries map into physical memory, with a swapped-out page redirected through the swap disk cache as a Swap Bypass page.]

  36. SwapBypass Concept [Diagram: each guest's application working set spans guest physical memory and swapped memory on its swap disk; the VMM's physical memory hosts a global swap disk cache holding swapped pages from guests 1, 2, and 3.]

  37. Necessary SymCall: query_vaddr() • Get current process ID • get_current(); (Internal Linux API) • Determine presence in Linux swap cache • find_get_page(); (Internal Linux API) • Determine page permissions for virtual address • find_vma(); (Internal Linux API) • Information extremely hard to get otherwise • Must be collected while exit is being handled
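
Below is a hedged guest-side sketch of such a handler built around the internal Linux calls named above, assuming 2.6-era kernel interfaces; the glue code, calling convention, and swap-cache lookup details are illustrative rather than the actual symbiotic Linux patch.

```c
#include <linux/sched.h>    /* get_current(), struct task_struct */
#include <linux/mm.h>       /* find_vma(), put_page()            */
#include <linux/pagemap.h>  /* find_get_page()                   */
#include <linux/swap.h>     /* swapper_space (2.6-era kernels)   */

/* Sketch of a guest-side query_vaddr() handler. The VMM passes the
 * faulting virtual address plus the swap entry it read from the guest
 * PTE; the handler reports the current PID, the VMA permissions, and
 * whether the page is sitting in the Linux swap cache. It runs
 * synchronously while the VMM is still handling the exit. */
long symcall_query_vaddr(unsigned long vaddr, unsigned long swap_entry_val,
                         unsigned long *pid, unsigned long *prot,
                         int *in_swap_cache)
{
    struct task_struct *task = get_current();   /* current process   */
    struct vm_area_struct *vma;
    struct page *page;

    *pid = task->pid;

    vma = find_vma(task->mm, vaddr);             /* page permissions  */
    *prot = vma ? vma->vm_flags : 0;

    /* The swap cache is indexed by the swap entry (2.6-era interface). */
    page = find_get_page(&swapper_space, swap_entry_val);
    *in_swap_cache = (page != NULL);
    if (page)
        put_page(page);

    return 0;
}
```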

  38. Evaluation • Memory system micro-benchmarks • Stream, GUPS, ECT Memperf • Configured to overcommit anonymous memory • Cause thrashing in the guest OS • Overhead isolated to swap subsystem • Ideal swap device implemented as RAM disk • I/O occurs at main memory speeds • Provides lower bound for performance gains

  39. Bypassing Swap Overhead [Graph: Stream, a simple vector kernel, across working set sizes; performance improves, shown against the ideal I/O improvement.]

  40. OS Drivers/Modules • Access standard OS driver API • Dependent on internal OS implementation

  41. Standard Symbiotic Modules • Guest exposes standard symbiotic API via SymSpy

  42. Secure Symbiotic Modules • Symbiotic API, protected from guest • Secure initialization • Virtual memory overlay
