
Virtual Machine Memory Access Tracing With Hypervisor Exclusive Cache



Presentation Transcript


  1. Virtual Machine Memory Access Tracing With Hypervisor Exclusive Cache USENIX ’07 Pin Lu & Kai Shen Department of Computer Science University of Rochester

  2. Motivation • Virtual Machine (VM) memory allocation • Lack of OS information at the hypervisor • The existing approaches do not work well • Static allocation → inefficient memory utilization & suboptimal performance • Working set sampling (e.g., VMware ESX Server) → limited information to support flexible allocation; works poorly under workloads with little data reuse

  3. Miss Ratio Curve • Miss ratio curve (MRC) • The miss ratios at different memory allocation sizes • Allows flexible allocation objectives • The reuse distance distribution (sketched below) [Figure: example MRC; X axis: memory size, Y axis: page miss ratio, with the current allocation size marked]
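The MRC computation itself is straightforward to sketch. Below is a minimal Python illustration (not from the paper) of the classic LRU stack-distance method; the function name, trace format, and candidate sizes are all hypothetical, and a real implementation would use an O(log n) tree rather than a linear list.

```python
from collections import Counter

def miss_ratio_curve(trace, sizes):
    """Estimate an LRU miss ratio curve from a page access trace.

    A page's reuse distance is the number of distinct pages touched
    since its previous access; under LRU, an access misses exactly
    when its reuse distance is at least the cache size in pages.
    """
    stack = []             # LRU stack, most recently used first
    distances = Counter()  # histogram of observed reuse distances
    cold = 0               # first-touch accesses (infinite distance)
    for page in trace:
        if page in stack:
            d = stack.index(page)  # distinct pages since last access
            distances[d] += 1
            stack.remove(page)
        else:
            cold += 1
        stack.insert(0, page)      # page becomes most recently used
    total = sum(distances.values()) + cold
    # Miss ratio at size s = fraction of accesses with distance >= s.
    return {s: 1.0 - sum(c for d, c in distances.items() if d < s) / total
            for s in sizes}
```

For example, `miss_ratio_curve([1, 2, 1, 3, 2, 1], [1, 2, 3])` yields miss ratios of 1.0, 0.83, and 0.5, showing how the curve falls as more pages fit.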

  4. Related Work on MRC Estimation • Geiger (Jones et al., ASPLOS 2006) • Appends a ghost buffer beyond VM memory • Reuse distance is tracked through I/O • Dynamic Tracking of MRC for Memory Management (Zhou et al., ASPLOS 2004) & CRAMM (Yang et al., OSDI 2006) • Protecting the LRU pages • Reuse distance is tracked through memory accesses • Transparent Contribution of Memory (Cipar et al., USENIX 2006) • Periodically samples the access bits to approximate the memory access traces

  5. Estimate VM MRC with Hypervisor Cache • The hypervisor cache approach • Part of VM memory → cache managed by the hypervisor • Memory accesses are tracked by cache references • Low overhead & requires minimal VM information [Figure: data misses flow from VM memory in the virtual machine to the hypervisor cache, and on a further miss to storage]

  6. Outline • The Hypervisor Cache • Design • Transparency & overhead • Evaluation • MRC-directed Multi-VM Memory Allocation • Allocation Policy • Evaluation • Summary & Future Work

  7. Design [Figure: VM memory split into VM direct memory plus a hypervisor cache, shown at several split points] • Track MRC with VM memory allocation • Part of VM memory → hypervisor cache • Exclusive cache (sketched below) • Caching efficiency • Comparable miss rate • HCache does not incur an extra miss rate if LRU is employed • Data admission from the VM • Avoids expensive storage I/O
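To make the exclusive-cache behavior concrete, here is a hedged Python sketch; `ExclusiveHCache` and its methods are illustrative names, not Xen interfaces. Pages enter the cache only when the VM evicts them, and a hit moves the page back to the VM, so the two levels stay disjoint; recording the LRU depth of each hit yields reuse-distance samples beyond the VM's direct memory size, which is what extends the MRC.

```python
from collections import OrderedDict

class ExclusiveHCache:
    """Sketch of an exclusive hypervisor cache (hypothetical interface)."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # storage location -> data, LRU order
        self.hit_depths = []        # LRU depths of hits, for the MRC
        self.misses = 0

    def on_vm_eviction(self, storage_loc, data):
        # Admit the page the VM just evicted (no storage I/O needed).
        self.pages[storage_loc] = data
        self.pages.move_to_end(storage_loc)  # most recently admitted
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # drop the cache's LRU page

    def on_vm_read_miss(self, storage_loc):
        # The VM missed in its direct memory; check the cache first.
        if storage_loc in self.pages:
            # Reuse distance ~= VM direct memory size + this depth.
            self.hit_depths.append(
                list(reversed(self.pages)).index(storage_loc))
            return self.pages.pop(storage_loc)  # hit: back to the VM
        self.misses += 1
        return None  # miss: the caller fetches the page from storage
```

Because both levels use LRU and never duplicate a page, the VM memory and cache together approximate a single LRU of their combined size, which is why the cache adds no extra misses.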

  8. Cache Correctness • Cache contents need to be correct • i.e., matching the storage location • Challenging because the hypervisor has very limited information • VM data eviction notification • The VM OS notifies the hypervisor about a page eviction/release • Two-way mapping tables: page → storage location and the reverse (see the sketch below) • Each VM I/O request → mappings are inserted into both mapping tables • Each VM page eviction → data is admitted after consulting the mapping tables
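A hedged sketch of the two mapping tables (names and types are hypothetical; a real implementation would key on page frame numbers and disk block addresses):

```python
class MappingTables:
    """Two-way page <-> storage-location mapping (illustrative)."""

    def __init__(self):
        self.page_to_storage = {}
        self.storage_to_page = {}

    def on_vm_io(self, page_frame, storage_loc):
        # Insert the pair on every VM I/O request, dropping any stale
        # mapping so each table stays a consistent one-to-one map.
        old_loc = self.page_to_storage.pop(page_frame, None)
        if old_loc is not None:
            self.storage_to_page.pop(old_loc, None)
        old_page = self.storage_to_page.pop(storage_loc, None)
        if old_page is not None:
            self.page_to_storage.pop(old_page, None)
        self.page_to_storage[page_frame] = storage_loc
        self.storage_to_page[storage_loc] = page_frame

    def lookup_for_eviction(self, page_frame):
        # On a VM eviction notification, find which storage location
        # the page's data belongs to before admitting it to the cache;
        # None means the mapping is unknown and the page is not admitted.
        return self.page_to_storage.get(page_frame)
```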

  9. Design Transparency & Overhead • The current design is not transparent • Explicit page eviction notification from the VM OS • The changes are small and fit well with para-virtualization • Reuse time inference techniques (Geiger) are not appropriate • The page may have already been changed – too late to admit it from the VM • System overhead • Cache and mapping table management • Minor page faults • Page eviction notification

  10. System Workflow [Workflow figure not reproduced; step numbers below refer to it] • More complete page miss ratio info • Smaller VM direct memory (larger hypervisor cache) • The cache can be kept permanently (no step 3) • If the overhead is not tolerable

  11. Prototype Implementation [Figure: the XenU front-end issues reads, writes, and evictions to the Xen0 back-end, which manages the cache & tables and reads/writes storage] • Hypervisor Xen 3.0.2 with VM OS Linux 2.6.16 • Page eviction as a new type of VM I/O request • Hypervisor cache populated through ballooning • HCache and mapping tables maintained in the Xen0 back-end driver • Page copying to transfer data

  12. Hypervisor Cache Evaluation • Goals • Evaluate caching performance, overhead, & MRC prediction accuracy • VM workloads • I/O bound • SPECweb99 • Keyword searching • TPC-C-like • CPU bound • TPC-H-like

  13. Throughput Results • Total VM memory is 512MB • Hypervisor cache sizes: 12.5%, 25%, 50%, and 75% of total VM memory

  14. CPU Overhead Results • Total VM memory is 512MB • Hypervisor cache sizes: 12.5%, 25%, 50%, and 75% of total VM memory

  15. MRC Prediction Results

  16. Outline • The Hypervisor Cache • Design • Transparency and overhead • Evaluation • MRC-directed Multi-VM Memory Allocation • Allocation Policy • Evaluation • Summary & Future Work

  17. MRC-directed Multi-VM Memory Allocation • More complete VM MRC via the hypervisor cache • Provides detailed miss ratios at different memory sizes • Flexible VM memory allocation policies • Isolated Sharing Policy (sketched below) • Maximize system-wide performance • e.g., minimize the geometric mean of all VMs’ miss ratios • Constrain individual VM performance degradation • e.g., no VM suffers more than an extra α% miss ratio
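The slides do not spell out the search procedure, so the following is only one plausible greedy sketch of the isolated sharing policy, in Python with hypothetical names; `mrcs[i]` stands for VM i's predicted miss ratio as a function of allocation size, and the isolation constraint is read as a relative α bound.

```python
import math

def isolated_sharing(mrcs, base, step=16, alpha=0.05):
    """Greedy MRC-directed allocation sketch (not the paper's algorithm).

    Repeatedly moves `step` MB between VMs whenever doing so lowers the
    geometric mean of predicted miss ratios, while never letting a VM's
    miss ratio exceed its baseline ratio by more than alpha (relative).
    """
    alloc = list(base)
    limit = [mrcs[i](base[i]) * (1 + alpha) for i in range(len(base))]

    def geo_mean(a):
        # Geometric mean of predicted miss ratios (epsilon avoids log 0).
        return math.exp(sum(math.log(max(m(s), 1e-9))
                            for m, s in zip(mrcs, a)) / len(a))

    improved = True
    while improved:
        improved = False
        best = geo_mean(alloc)
        for src in range(len(alloc)):
            if alloc[src] < step:
                continue  # nothing left to take from this VM
            for dst in range(len(alloc)):
                if dst == src:
                    continue
                trial = list(alloc)
                trial[src] -= step
                trial[dst] += step
                if mrcs[src](trial[src]) > limit[src]:
                    continue  # would violate src's isolation bound
                score = geo_mean(trial)
                if score < best:
                    best, alloc, improved = score, trial, True
    return alloc
```

With α = 0, no VM may do worse than under its base allocation, matching the 0% tolerance configuration compared against ESX Server later in the talk.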

  18. Isolated Sharing Experiments • Base allocation of 512MB each; minimize the geometric mean of miss ratios • Isolation constraint at 5% ⇒ achieves a geometric mean miss ratio of 0.85 • Isolation constraint at 25% ⇒ achieves a geometric mean miss ratio of 0.41

  19. Comparison with VMware ESX Server • ESX Server policy vs. isolated sharing with 0% tolerance • Both work well when the VM working set (around 330MB) fits in the VM memory (512MB) • Add a background noise workload that slowly scans through a large dataset • The VM MRC identifies that the VM does not benefit from extra memory • ESX Server estimates the working set size at 800MB, preventing memory reclamation

  20. Summary and Future Work • Summary • VM MRC estimation via Hypervisor Cache • Features, design and implementation • MRC-directed multi-VM memory allocation • Future Work • Improving the transparency of HCache • Reducing the overhead of HCache • Generic hypervisor buffer cache
