
Redesigning Xen Memory Sharing (Grant) Mechanism




Presentation Transcript


  1. Redesigning Xen Memory Sharing (Grant) Mechanism Kaushik Kumar Ram (Rice University) Jose Renato Santos (HP Labs) Yoshio Turner (HP Labs) Alan L. Cox (Rice University) Scott Rixner (Rice University) Xen Summit Aug 2nd 2011

  2. This talk…
  • Will make a case for redesigning the grant mechanism, to achieve better I/O performance and other benefits
  • Will propose an alternate design for the grant mechanism
  • Will present an evaluation of a prototype of this new design

  3. Outline
  • Motivation
  • Proposal
  • A grant reuse scheme
  • Evaluation
  • Conclusion

  4. Traditional I/O Virtualization
  Two-level memory sharing:
  • Guest domain - driver domain memory sharing (grant mechanism)
  • Driver domain - device memory sharing (IOMMU)
  [Diagram: frontend driver in the guest domain, backend and physical driver in the driver domain, above the Xen hypervisor and the device hardware]

  5. Direct Device Assignment
  One-level memory sharing:
  • Guest domain - device memory sharing (IOMMU)
  [Diagram: physical driver in the guest domain, above the Xen hypervisor and the device hardware]

  6. Grant Mechanism
  Controlled memory sharing between domains:
  • Source domain can share its memory pages with a specific destination domain
  • Destination domain can validate, via the hypervisor, that the shared pages belong to the source domain

  7. Creating Shared Memory using the Grant Mechanism
  Source domain:
  • Creates a grant entry in its grant table
  Destination domain:
  • Issues the grant map hypercall
  • Hypervisor validates the grant and maps the source page
  [Diagram: the source domain passes a grant reference to the destination domain, which issues the hypercall to the Xen hypervisor; the hypervisor checks the grant table]

  8. Revoking Shared Memory using the Grant Mechanism
  Destination domain:
  • Issues the grant unmap hypercall
  • Hypervisor unmaps the page
  Source domain:
  • Deletes the grant entry from its grant table
  [Diagram: the destination domain issues the hypercall to the Xen hypervisor; the source domain updates its grant table]
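The issue/map/unmap/revoke sequence on slides 7-8 can be sketched as a toy model. This is plain Python, not the real Xen hypercall API; all class and method names here are illustrative. Note the constraint it encodes at the end: the source cannot safely revoke a grant while the page is still mapped in the destination, which is exactly the limitation slide 13 returns to.

```python
# Toy model of the classic grant lifecycle (slides 7-8). In real Xen the
# map/unmap steps are GNTTABOP grant-table operations issued by the
# destination (driver) domain; the names below are illustrative only.

class Hypervisor:
    def __init__(self):
        self.grant_tables = {}   # domid -> {gref: (page, readonly)}
        self.mappings = {}       # (dest, gref) -> page

    # Source domain: create a grant entry in its grant table.
    def issue_grant(self, src, gref, page, readonly=False):
        self.grant_tables.setdefault(src, {})[gref] = (page, readonly)

    # Destination domain: map hypercall; the hypervisor validates the grant.
    def map_grant(self, src, dest, gref):
        page, _ = self.grant_tables[src][gref]   # KeyError if invalid
        self.mappings[(dest, gref)] = page
        return page

    # Destination domain: unmap hypercall.
    def unmap_grant(self, dest, gref):
        del self.mappings[(dest, gref)]

    # Source domain: revoke by deleting the entry (only safe once unmapped).
    def revoke_grant(self, src, gref):
        assert all(g != gref for (_, g) in self.mappings), \
            "cannot revoke while the page is still mapped in the destination"
        del self.grant_tables[src][gref]

hv = Hypervisor()
hv.issue_grant("guest", gref=7, page="io-buffer")
assert hv.map_grant("guest", "driver", 7) == "io-buffer"
hv.unmap_grant("driver", 7)
hv.revoke_grant("guest", 7)   # succeeds only after the driver has unmapped
```

Because revocation requires the destination's cooperation (the unmap), every single I/O pays the full issue/map/unmap/revoke cost in this model.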

  9. IOMMU
  To safely share memory with I/O devices:
  • Maintain memory isolation between domains (direct device assignment)
  • Protect against device driver bugs
  • Protect against attacks exploiting device DMA
  [Diagram: the IOMMU table translates I/O addresses used by the I/O device into machine addresses in memory]

  10. Sharing Memory via IOMMUs
  Para-virtualized I/O:
  • Fine-grained sharing
  • IOMMU mapping set up during the grant map hypercall and revoked during the grant unmap hypercall
  Direct device assignment:
  • Only coarse-grained sharing

  11. High Memory Sharing Overhead
  • I/O page is shared only for the duration of a single I/O
  • High cost of grant hypercalls and mapping/unmapping incurred in the driver domain on every I/O operation

  12. Reuse Scheme to Reduce Overhead
  • Take advantage of temporal and/or spatial locality in the use of I/O pages
  • Reuse grants when I/O pages are reused
  • Reduce grant issue and revoke operations
  • Reduce grant hypercall and mapping/unmapping overheads in the driver domain

  13. Reuse Under the Existing Grant Mechanism
  A grant reuse scheme requires:
  • Not revoking grants after every I/O operation
  • Persistent mapping of guest I/O pages in the driver domain
  • Revoking grants only when pages are re-purposed for non-I/O uses
  Today, there is no way for a guest domain to revoke access while its page is still mapped in the driver domain

  14. Goals
  • Enable reuse to reduce memory-sharing overheads during I/O
  • Support unilateral revocation of grants by source domains
  • Support a unified interface to share memory with I/O devices via IOMMUs

  15. Proposal
  • Move the grant-related hypercalls to the guest domains
  • Guest domains directly interact with the hypervisor to issue and revoke grants
  [Diagram: both the guest domain and the driver domain issue hypercalls to the Xen hypervisor, which maintains the grant table]

  16. Redesigned Grant Mechanism: 1. Initialization
  • INIT1 hypercall (para-virtualized I/O only):
    • Registers a virtual address range: base address(es) and size
  • INIT2 hypercall:
    • Provides a "device_id"
    • Returns the size of the "grant address space" (0 - size of the address range)
  [Diagram: INIT2 hypercall from the guest domain and INIT1 hypercall from the driver domain to the Xen hypervisor]

  17. Grant (I/O) Address Space
  [Diagram: a grant address space of size 0x10000 corresponds to the range 0x30000-0x40000 in the driver domain's virtual address space (page table) and to the range 0x10000-0x20000 in the I/O virtual address space (IOMMU table)]
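Using the example ranges on slide 17, translating a grant reference (an offset into the grant address space) into a driver-domain virtual address and an I/O address is plain base-plus-offset arithmetic. A minimal sketch, assuming the bases come from the INIT hypercalls as the slides describe:

```python
# Translate a grant reference (offset into the grant address space) into the
# driver domain's virtual address and I/O (IOMMU) address, using the example
# ranges from slide 17: VA range 0x30000-0x40000, IOVA range 0x10000-0x20000.

GRANT_SPACE_SIZE = 0x10000   # size returned by the INIT2 hypercall
VA_BASE = 0x30000            # base of the range registered via INIT1
IOVA_BASE = 0x10000          # base of the IOMMU-mapped range

def translate(gref):
    assert 0 <= gref < GRANT_SPACE_SIZE, "grant reference out of range"
    return VA_BASE + gref, IOVA_BASE + gref

va, iova = translate(0x7000)           # the grant reference used on slide 19
assert (va, iova) == (0x37000, 0x17000)
```

This is why the driver domain on slide 18 can translate a grant reference without any hypercall of its own: the correspondence between the three address spaces is fixed at initialization.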

  18. Redesigned Grant Mechanism: 2. Creating Shared Memory
  Guest domain:
  • Picks a "grant reference" (an offset within the grant address space)
  • Issues the grant MAP hypercall
  • Hypervisor validates the grant and maps the guest page
  Driver domain:
  • Translates the grant reference into a virtual address and an I/O address
  [Diagram: the guest domain passes the grant reference to the driver domain and issues the MAP hypercall to the Xen hypervisor, which also sets up the IOMMU mapping]

  19. Grant Mapping
  [Diagram: grant reference 0x7000 within the grant address space (0x0-0x10000), before mapping into the driver domain's virtual address space (0x30000-0x40000) and the I/O virtual address space (0x10000-0x20000)]

  21. Grant Mapping
  [Diagram: grant reference 0x7000 in the grant address space (0x0-0x10000), now mapped to virtual address 0x37000 in the driver domain's address space (0x30000-0x40000) and to I/O address 0x17000 in the IOMMU table (0x10000-0x20000)]

  22. Redesigned Grant Mechanism: 3. Revoking Shared Memory
  Guest domain:
  • Issues the grant UNMAP hypercall, providing the grant reference
  • Hypervisor unmaps the page
  [Diagram: the guest domain issues the UNMAP hypercall to the Xen hypervisor, which also removes the IOMMU mapping]

  23. Unilateral Revocation
  • Guest domains can revoke grants at any time by issuing the grant UNMAP hypercall
  • No driver domain participation is required
  • It is safe to revoke grants even while the I/O pages are in use, since the corresponding IOMMU mappings are also removed
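The redesigned flow on slides 18-23 can be sketched in the same toy style: the guest issues MAP and UNMAP directly, and UNMAP tears down both the driver-domain page-table entry and the IOMMU entry in one step, which is what makes revocation unilateral and safe even mid-I/O. Again, names are illustrative, not the real Xen API.

```python
# Toy model of the redesigned mechanism: the *guest* domain issues the
# MAP/UNMAP hypercalls, and both the driver-domain mapping and the IOMMU
# mapping are managed together by the hypervisor.

class RedesignedHypervisor:
    def __init__(self):
        self.page_table = {}    # gref -> page (driver-domain mapping)
        self.iommu_table = {}   # gref -> page (device mapping)

    def grant_map(self, gref, page):
        # Issued by the guest: one hypercall installs both mappings.
        self.page_table[gref] = page
        self.iommu_table[gref] = page

    def grant_unmap(self, gref):
        # Also issued by the guest: no driver-domain cooperation needed.
        # Removing the IOMMU entry makes revocation safe even mid-I/O,
        # since the device can no longer DMA to the page.
        self.page_table.pop(gref, None)
        self.iommu_table.pop(gref, None)

hv = RedesignedHypervisor()
hv.grant_map(0x7000, "rx-buffer")
assert 0x7000 in hv.iommu_table              # device can DMA to the page
hv.grant_unmap(0x7000)                       # guest revokes unilaterally
assert 0x7000 not in hv.page_table and 0x7000 not in hv.iommu_table
```

Contrast with the classic mechanism: there is no point at which the source must wait for the destination to unmap before revoking.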

  24. Unified Interface
  • The grant hypercall interface can be invoked from a guest DMA library
  [Diagram: in the guest domain, both the SRIOV VF driver and netfront sit above the DMA library, which issues hypercalls to the Xen hypervisor; the hypervisor programs the IOMMU in hardware]

  25. Grant Reuse
  • Take advantage of temporal and/or spatial locality in the use of I/O pages: reuse grants when I/O pages are reused
  • Reuse grants across multiple I/O operations:
    • Guest domain issues a grant
    • Driver domain uses the I/O page for multiple I/O operations
    • Guest domain revokes the grant
  • Guest domains can implement any scheme to reuse grants
  • Relax safety constraints (security vs. performance trade-off): shared mappings, delayed invalidations, optimistic tear-down, etc.

  26. A Grant Reuse Scheme
  • Security compromise: prevents corruption of non-I/O pages
    • Policy: never share a non-I/O read-write page
  • Receive: read-write sharing
    • Allocate I/O buffers from a dedicated pool (e.g., a slab cache in Linux)
    • Revoke grants when pages are reaped from the pool
    • The I/O buffer pool also promotes temporal locality
  • Transmit: read-only sharing
    • Persistent sharing: grants are revoked only when no more grant references are available (or pages are kept mapped always)
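The receive-side policy above can be sketched as a buffer pool: a page is granted once, when it first enters the pool, and the grant is revoked only when the page is reaped, not after every I/O. This is an illustrative model, not the Linux slab-cache implementation; `issue_grant`/`revoke_grant` stand in for the hypercall wrappers.

```python
# Sketch of the receive-side reuse policy (slide 26): grants are amortized
# over many I/O operations via a dedicated I/O buffer pool.

class IOBufferPool:
    def __init__(self, issue_grant, revoke_grant):
        self.issue_grant = issue_grant     # hypothetical hypercall wrappers
        self.revoke_grant = revoke_grant
        self.free = []                     # granted pages awaiting reuse
        self.hypercalls = 0

    def alloc(self):
        if self.free:                      # reuse: no new grant needed
            return self.free.pop()
        page = object()                    # stand-in for a fresh page
        self.issue_grant(page)             # grant once, on first use
        self.hypercalls += 1
        return page

    def release(self, page):               # I/O done: keep grant, keep page
        self.free.append(page)

    def reap(self):                        # memory pressure: revoke grants
        for page in self.free:
            self.revoke_grant(page)
            self.hypercalls += 1
        self.free.clear()

pool = IOBufferPool(issue_grant=lambda p: None, revoke_grant=lambda p: None)
for _ in range(1000):                      # 1000 I/O operations...
    buf = pool.alloc()
    pool.release(buf)
pool.reap()
assert pool.hypercalls == 2                # ...cost only one issue + one revoke
```

The per-I/O hypercall cost drops from two (map + unmap) to effectively zero in the steady state, which is the source of the gains in the evaluation that follows.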

  27. Evaluation - Setup and Methodology
  • Server configuration:
    • HP ProLiant BL460c G7 blade server
    • Intel Xeon X5670 - 6 CPU cores
    • 32 GB RAM
    • 2 embedded 10 GbE ports
  • Domain configuration:
    • Domain0: Linux 2.6.32.40 pvops kernel, 1 GB memory
    • Driver domain: linux-2.6.18.8-xen0 (modified), 512 MB memory
    • Guest domains: linux-2.6.18.8-xenU (modified), 512 MB memory
    • Driver and guest domains configured with one VCPU each (pinned)
  • Netperf TCP streaming tests

  28. Evaluation - Transmit Results
  • mapcount() logic significantly affects performance (baseline with IOMMU)

  29. Evaluation - Receive Results
  • No IOMMU overhead during RX
  • Driver domain is the bottleneck (baseline)

  30. Evaluation – Inter-guest Results
  • Driver domain is the bottleneck (baseline)

  31. Discussion
  • Supporting multiple mappings in the driver domain (e.g., the block tap interface):
    • The driver domain can register address ranges from multiple address spaces
    • Or use hardware-assisted memory virtualization
  • Unilateral revocation cannot be supported without IOMMUs, since grants to in-use pages cannot then be revoked

  32. Conclusions
  • Made a case for redesigning the grant mechanism:
    • Enable grant reuse
    • Support unilateral revocations
    • Support a unified interface to program IOMMUs
  • Proposed an alternate design in which the source domain interacts directly with the hypervisor
  • Implemented and evaluated a reuse scheme
