CS203A Computer Architecture Lecture 15 Virtual Memory

  1. CS203A Computer Architecture, Lecture 15: Virtual Memory

  2. Virtual Memory • Idea 1: Many programs sharing DRAM memory so that context switches can occur • Idea 2: Allow a program to be written without memory constraints – the program can exceed the size of the main memory • Idea 3: Relocation: parts of the program can be placed at different locations in memory instead of as one contiguous chunk • Virtual Memory: (1) DRAM memory holds many programs running at the same time (processes) (2) use DRAM memory as a kind of “cache” for disk

  3. Memory Hierarchy: The Big Picture Data movement in a memory hierarchy.

  4. Virtual Memory has own terminology • Each process has its own private “virtual address space” (e.g., 2^32 bytes); the CPU actually generates “virtual addresses” • Each computer has a “physical address space” (e.g., 128 megabytes of DRAM); also called “real memory” • Address translation: mapping virtual addresses to physical addresses • Allows multiple programs to use (different chunks of physical) memory at the same time • Also allows some chunks of virtual memory to be represented on disk, not in main memory (to exploit the memory hierarchy)

  5. Mapping Virtual Memory to Physical Memory • Divide memory into equal-sized “chunks” (say, 4KB each) • Any chunk of Virtual Memory can be assigned to any chunk of Physical Memory (a “page”) • Figure: a single process’s virtual address space (code, static data, heap, stack) mapped page by page onto 64 MB of physical memory

  6. Handling Page Faults • A page fault is like a cache miss • Must find the page in the lower level of the hierarchy • If the valid bit is zero, the Physical Page Number points to a page on disk • When the OS starts a new process, it creates space on disk for all the pages of the process, sets all valid bits in the page table to zero, and all Physical Page Numbers to point to disk • called Demand Paging: pages of the process are loaded from disk only as needed • Create “swap” space for all virtual pages on disk
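
As a rough illustration of the demand-paging flow just described, the sketch below shows what an OS page-fault handler might do. It is a minimal sketch, not the actual handler of any OS; the entry layout and the helper routines (find_free_frame, evict_victim_frame, read_page_from_swap) are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical page-table entry: valid bit plus either a physical
 * frame number (when valid) or a location in swap space (when not). */
typedef struct {
    bool     valid;
    uint32_t frame;      /* physical page number if valid           */
    uint32_t swap_slot;  /* where the page lives on disk if !valid  */
} pte_t;

extern int  find_free_frame(void);            /* -1 if memory is full */
extern int  evict_victim_frame(void);         /* write back if dirty  */
extern void read_page_from_swap(uint32_t slot, int frame);

/* Called by the OS when the hardware reports a page fault on vpn. */
void handle_page_fault(pte_t *page_table, uint32_t vpn)
{
    pte_t *pte = &page_table[vpn];

    int frame = find_free_frame();
    if (frame < 0)
        frame = evict_victim_frame();    /* replacement, e.g. LRU approx. */

    read_page_from_swap(pte->swap_slot, frame); /* demand paging: load now */

    pte->frame = (uint32_t)frame;        /* update mapping and revalidate  */
    pte->valid = true;                   /* then retry the faulting access */
}
```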

  7. Comparing the 2 Levels of the Hierarchy (Cache → Virtual Memory) • Block or Line → Page • Miss → Page Fault • Block Size 32-64B → Page Size 4K-16KB • Placement: Direct Mapped or N-way Set Associative → Fully Associative • Replacement: LRU or Random → Least Recently Used (LRU) approximation • Write Policy: Write Through or Write Back → Write Back • How Managed: Hardware → Hardware + Software (Operating System)

  8. How to Perform Address Translation? • VM divides memory into equal-sized pages • Address translation relocates entire pages • offsets within the pages do not change • if the page size is a power of two, the virtual address separates into two fields (like the cache index and offset fields): virtual address = Virtual Page Number | Page Offset

  9. Mapping Virtual to Physical Address • Virtual Address (1KB page size): bits 31..10 = Virtual Page Number, bits 9..0 = Page Offset • Translation replaces the Virtual Page Number with a Physical Page Number; the Page Offset passes through unchanged • Physical Address: bits 29..10 = Physical Page Number, bits 9..0 = Page Offset
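
Because the page size here is a power of two (1 KB = 2^10 bytes), the split shown above reduces to shifts and masks. A minimal sketch using the slide's 1 KB pages and 32-bit virtual addresses; the example address and physical page number are made up:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE    1024u                 /* 1 KB page, as on the slide      */
#define OFFSET_BITS  10                    /* log2(1024)                      */
#define OFFSET_MASK  (PAGE_SIZE - 1)       /* low 10 bits = page offset       */

int main(void)
{
    uint32_t va  = 0x12345678;             /* arbitrary example virtual addr. */
    uint32_t vpn = va >> OFFSET_BITS;      /* bits 31..10: virtual page nbr   */
    uint32_t off = va & OFFSET_MASK;       /* bits  9..0 : unchanged offset   */

    /* Translation replaces the VPN with a physical page number; the offset
     * is copied through untouched. The PPN value below is made up. */
    uint32_t ppn = 0x0001F;
    uint32_t pa  = (ppn << OFFSET_BITS) | off;

    printf("VA=0x%08x -> VPN=0x%06x offset=0x%03x -> PA=0x%08x\n",
           va, vpn, off, pa);
    return 0;
}
```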

  10. Address Translation • Want fully associative page placement • How to locate the physical page? • Search impractical (too many pages) • A page table is a data structure which contains the mapping of virtual pages to physical pages • There are several different ways, all up to the operating system, to keep this data around • Each process running in the system has its own page table

  11. Address Translation: Page Table • Virtual Address (VA) = virtual page number | offset • The virtual page number indexes into the Page Table (reached via the Page Table Register) • Each Page Table entry holds a Valid bit, Access Rights, and a Physical Page Number; an invalid entry points to the page on disk • Physical Memory Address (PA) = Physical Page Number | offset • The Page Table is located in physical memory • Access Rights: None, Read Only, Read/Write, Executable
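
Tying the figure together, a single-level translation might look like the sketch below. This is illustrative only: the 4 KB page size, the field names, and the fault helpers are assumptions, not the lecture's definitions.

```c
#include <stdint.h>
#include <stdbool.h>

enum access { AR_NONE, AR_READ_ONLY, AR_READ_WRITE, AR_EXECUTABLE };

typedef struct {
    bool        valid;    /* page resident in physical memory?      */
    enum access rights;   /* None / Read Only / Read/Write / Exec.  */
    uint32_t    ppn;      /* physical page number (or disk pointer) */
} pte_t;

#define OFFSET_BITS 12    /* assume 4 KB pages for this sketch */

extern void page_fault(uint32_t vpn);         /* hypothetical OS entry points */
extern void protection_fault(uint32_t vpn);

/* page_table plays the role of the structure reached through the
 * Page Table Register in the figure. */
uint32_t translate(const pte_t *page_table, uint32_t va, bool is_write)
{
    uint32_t vpn    = va >> OFFSET_BITS;          /* index into page table */
    uint32_t offset = va & ((1u << OFFSET_BITS) - 1);
    const pte_t *pte = &page_table[vpn];

    if (!pte->valid)
        page_fault(vpn);                  /* page is on disk: load and retry */
    if (pte->rights == AR_NONE || (is_write && pte->rights != AR_READ_WRITE))
        protection_fault(vpn);            /* access rights violation         */

    return (pte->ppn << OFFSET_BITS) | offset;    /* physical address (PA)   */
}
```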

  12. Page Tables and Address Translation The role of page table in the virtual-to-physical address translation process.

  13. Protection and Sharing in Virtual Memory Virtual memory as a facilitator of sharing and memory protection.

  14. Optimizing for Space • Page Table too big! • 4GB Virtual Address Space ÷ 4 KB page = 2^20 (~1 million) Page Table Entries = 4 MB just for the Page Table of a single process! • Variety of solutions to trade off Page Table size against slower performance when a miss occurs in the TLB: use a limit register to restrict page table size and let it grow with more pages, multilevel page tables, paging the page tables, etc. (Take an O/S class to learn more)
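
For reference, the arithmetic behind the 4 MB figure as a small self-checking sketch, assuming the usual 4-byte page table entry:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t vaddr_space = 1ull << 32;   /* 4 GB virtual address space       */
    uint64_t page_size   = 4 * 1024;     /* 4 KB pages                       */
    uint64_t pte_size    = 4;            /* assume a 4-byte page table entry */

    uint64_t entries = vaddr_space / page_size;   /* 2^20 ~ 1 million        */
    uint64_t bytes   = entries * pte_size;        /* 4 MB per process        */

    printf("%llu entries, %llu MB of page table per process\n",
           (unsigned long long)entries,
           (unsigned long long)(bytes >> 20));
    return 0;
}
```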

  15. Hierarchical Page Tables • To reduce the size of the page table • A 1-level page table is too expensive for a large virtual address space • E.g. #1: a 64-bit CPU, 4KB pages, 4B PTE: page table size = 2^64 / 2^12 × 4 B = 2^54 bytes = 2^14 TB • E.g. #2: a 32-bit CPU with PAE enabled supports up to 64GB of memory space in Linux • Solution: create small page tables only for the parts of virtual memory actually in use (stack and heap) • The virtual address is now split into multiple chunks that index a page table “tree” • Examples: x86 paging (with PAE, without PSE): next page; AMD Opteron: textbook Appendix C-54
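
A minimal sketch of a two-level walk in the spirit of the x86 scheme on the next slide. The 10/10/12 split of a 32-bit virtual address (classic non-PAE paging, not the PAE layout mentioned above) and all structure names are illustrative assumptions:

```c
#include <stdint.h>
#include <stddef.h>

/* Classic 32-bit two-level split: 10-bit directory index,
 * 10-bit table index, 12-bit page offset (illustrative). */
#define DIR_SHIFT 22
#define TBL_SHIFT 12
#define IDX_MASK  0x3FFu
#define OFF_MASK  0xFFFu

typedef struct {
    uint32_t present : 1;
    uint32_t ppn     : 20;     /* physical page frame number */
} entry_t;

/* Second-level tables are only allocated for the regions of the
 * virtual address space that are actually in use (stack, heap, ...). */
typedef struct {
    entry_t *tables[1024];     /* pointers to second-level page tables */
} page_dir_t;

/* Returns 0 on a missing mapping; a real walker would raise a page fault. */
uint32_t walk(const page_dir_t *dir, uint32_t va)
{
    uint32_t di  = (va >> DIR_SHIFT) & IDX_MASK;
    uint32_t ti  = (va >> TBL_SHIFT) & IDX_MASK;
    uint32_t off =  va & OFF_MASK;

    entry_t *table = dir->tables[di];
    if (table == NULL || !table[ti].present)
        return 0;                            /* hole: no 2nd-level table */

    return (table[ti].ppn << TBL_SHIFT) | off;
}
```

The point of the tree is that directory slots for unused regions stay NULL, so the 4 MB flat table shrinks to a handful of 4 KB tables for a typical process.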

  16. x86 hierarchical paging

  17. How to Translate Fast? • Problem: Virtual Memory requires two memory accesses! • one to translate the Virtual Address into a Physical Address (page table lookup) • one to transfer the actual data (cache hit) • But the Page Table is in physical memory! => 2 main memory accesses! • Observation: since there is locality in pages of data, there must be locality in the virtual addresses of those pages! • Why not create a cache of virtual to physical address translations to make translation fast? (smaller is faster) • For historical reasons, such a “page table cache” is called a Translation Lookaside Buffer, or TLB

  18. Typical TLB Format • Fields per entry: Valid, Ref, Dirty, Access Rights, Virtual Page Nbr (“tag”), Physical Page Nbr (“data”) • TLB is just a cache of the page table mappings • Dirty: since we use write back, need to know whether or not to write the page to disk when replaced • Ref: used to approximate LRU on replacement • TLB access time is comparable to cache access time (much less than main memory access time)
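
In code, a fully associative TLB with the fields listed above might be sketched as follows; the entry widths and names are illustrative, with the virtual page number serving as the "tag" and the physical page number as the "data":

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64            /* small, e.g. the 64-entry MIPS TLB below */

typedef struct {
    bool     valid;
    bool     ref;                 /* used to approximate LRU on replacement  */
    bool     dirty;               /* write-back: page must go to disk later  */
    uint8_t  rights;              /* access rights                           */
    uint32_t vpn;                 /* "tag"  : virtual page number            */
    uint32_t ppn;                 /* "data" : physical page number           */
} tlb_entry_t;

/* Fully associative lookup: compare the VPN against every entry.
 * Returns true on a hit and writes the PPN through *ppn_out. */
bool tlb_lookup(tlb_entry_t tlb[TLB_ENTRIES], uint32_t vpn, uint32_t *ppn_out)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            tlb[i].ref = true;            /* remember recent use             */
            *ppn_out   = tlb[i].ppn;
            return true;                  /* TLB hit                         */
        }
    }
    return false;                         /* TLB miss: walk the page table   */
}
```

On a miss, the page table walk of slide 11 supplies the translation, which is then installed in the TLB, evicting an entry chosen with the Ref bits.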

  19. Translation Look-Aside Buffers • TLB is usually small, typically 32-4,096 entries • Like any other cache, the TLB can be fully associative, set associative, or direct mapped • Figure: the processor presents a virtual address to the TLB; on a hit, the resulting physical address goes to the cache and, on a cache miss, to main memory; on a TLB miss, the page table is consulted, and a page fault or protection violation traps to the OS fault handler, which may go to disk

  20. Translation Lookaside Buffer Virtual-to-physical address translation by a TLB and how the resulting physical address is used to access the cache memory.

  21. DECStation 3100/MIPS R2000 • Virtual Address: 20-bit virtual page number + 12-bit page offset • TLB: 64 entries, fully associative; each entry holds Valid, Dirty, a tag (virtual page number), and a 20-bit physical page number • Physical Address: physical page number + page offset, then split for the cache into a 16-bit physical address tag, 14-bit cache index, and 2-bit byte offset • Cache: 16K entries, direct mapped, 32-bit data per entry; a tag match signals a cache hit
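
The field widths in this figure translate directly into shifts and masks. The sketch below splits a physical address into the cache's 16-bit tag, 14-bit index, and 2-bit byte offset as drawn; the example address is made up:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t pa = 0x00ABCDEF;                 /* example physical address    */

    /* Field widths from the slide: 16-bit tag, 14-bit index, 2-bit offset. */
    uint32_t byte_off = pa        & 0x3;      /* bits  1..0                  */
    uint32_t index    = (pa >> 2) & 0x3FFF;   /* bits 15..2 : 16K entries    */
    uint32_t tag      = pa >> 16;             /* bits 31..16                 */

    printf("PA=0x%08x tag=0x%04x index=0x%04x byte_offset=%u\n",
           pa, tag, index, byte_off);
    return 0;
}
```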

  22. Introduction to Virtual Machines • VMs developed in the late 1960s • Remained important in mainframe computing over the years • Largely ignored in single-user computers of the 1980s and 1990s • Recently regained popularity due to: the increasing importance of isolation and security in modern systems, failures in security and reliability of standard operating systems, sharing of a single computer among many unrelated users, and the dramatic increases in raw processor speed, which make the overhead of VMs more acceptable

  23. What is a Virtual Machine (VM)? • Broadest definition includes all emulation methods that provide a standard software interface, such as the Java VM • “(Operating) System Virtual Machines” provide a complete system-level environment at the binary ISA level • Here we assume the ISA always matches the native hardware ISA, e.g., IBM VM/370, VMware ESX Server, and Xen • Present the illusion that VM users have an entire computer to themselves, including a copy of the OS • A single computer runs multiple VMs and can support multiple, different OSes • On a conventional platform, a single OS “owns” all HW resources; with a VM, multiple OSes all share the HW resources • The underlying HW platform is called the host, and its resources are shared among the guest VMs

  24. Virtual Machine Monitors (VMMs) • A virtual machine monitor (VMM) or hypervisor is the software that supports VMs • The VMM determines how to map virtual resources to physical resources • A physical resource may be time-shared, partitioned, or emulated in software • The VMM is much smaller than a traditional OS; the isolation portion of a VMM is ≈ 10,000 lines of code

  25. Other Uses of VMs • Focus here is on protection; 2 other commercially important uses of VMs: • Managing Software: VMs provide an abstraction that can run the complete SW stack, even including old OSes like DOS; typical deployment: some VMs running legacy OSes, many running the current stable OS release, a few testing the next OS release • Managing Hardware: VMs allow separate SW stacks to run independently yet share HW, thereby consolidating the number of servers; some run each application with a compatible version of the OS on separate computers, as separation helps dependability; a running VM can be migrated to a different computer, either to balance load or to evacuate from failing HW

  26. Requirements of a Virtual Machine Monitor (Overview) • A VM Monitor presents a SW interface to guest software, isolates the state of guests from each other, and protects itself from guest software (including guest OSes) • Guest software should behave on a VM exactly as if running on the native HW, except for performance-related behavior or limitations of fixed resources shared by multiple VMs • Guest software should not be able to change the allocation of real system resources directly • Hence, the VMM must control ≈ everything, even though the guest VM and OS currently running are temporarily using it: access to privileged state, address translation, I/O, exceptions and interrupts, …

  27. Requirements of a Virtual Machine Monitor (Implementation) • 2 modes: the VMM must be at a higher privilege level than the guest VMs, which generally run in user mode • Execution of privileged instructions is handled by the VMM • E.g., timer interrupt: the VMM suspends the currently running guest VM, saves its state, handles the interrupt, determines which guest VM to run next, and then loads its state; guest VMs that rely on a timer interrupt are provided with a virtual timer and an emulated timer interrupt by the VMM • Requirements of system virtual machines are ≈ the same as for paged virtual memory: at least 2 processor modes (system and user); a privileged subset of instructions available only in system mode, trapping if executed in user mode; all system resources controllable only via these instructions
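
The trap-and-emulate behavior described above can be sketched as a dispatcher inside the VMM. Everything here (trap codes, the guest-state type, the helper routines) is hypothetical and only illustrates the control flow, not any real hypervisor's code:

```c
#include <stdint.h>

/* Hypothetical trap codes and opaque guest-VM state. */
typedef enum { TRAP_PRIV_INSN, TRAP_TIMER, TRAP_OTHER } trap_t;
typedef struct vm vm_t;

extern void save_guest_state(vm_t *vm);
extern void load_guest_state(vm_t *vm);
extern void emulate_privileged_insn(vm_t *vm);
extern void deliver_virtual_timer_interrupt(vm_t *vm);
extern vm_t *schedule_next_guest(void);
extern void reflect_to_guest_os(vm_t *vm, trap_t t);

/* Sketch of the VMM's trap dispatcher: guests run in user mode, so any
 * privileged instruction or real interrupt lands here, in system mode. */
void vmm_trap_handler(vm_t *current, trap_t trap)
{
    save_guest_state(current);

    switch (trap) {
    case TRAP_PRIV_INSN:                  /* trap-and-emulate                */
        emulate_privileged_insn(current);
        break;
    case TRAP_TIMER:                      /* real timer belongs to the VMM   */
        deliver_virtual_timer_interrupt(current);
        current = schedule_next_guest();  /* maybe switch to another guest   */
        break;
    default:
        reflect_to_guest_os(current, trap);
        break;
    }

    load_guest_state(current);            /* resume the (possibly new) guest */
}
```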

  28. ISA Support for Virtual Machines • Why x86 is not virtualizable: • The guest VM is aware of virtualization: reading the code segment selector (%cs) reveals the current privilege level (CPL), which is stored in the low two bits of %cs • Non-trappable instructions: some instructions cannot be trapped in user space, e.g. popf • Solutions: s/w: binary translation; h/w: Virtual Machine Control Block (VMCB) + “guest mode”

  29. Impact of VMs on Virtual Memory • How is virtual memory virtualized if each guest OS in every VM manages its own set of page tables? • The VMM separates real and physical memory, making real memory a separate, intermediate level between virtual memory and physical memory (some use the terms virtual memory, physical memory, and machine memory to name the 3 levels) • The guest OS maps virtual memory to real memory via its page tables, and the VMM's page tables map real memory to physical memory • The VMM maintains a shadow page table that maps directly from the guest virtual address space to the physical address space of the HW, rather than paying an extra level of indirection on every memory access • The VMM must trap any attempt by the guest OS to change its page table or to access the page table pointer
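
Conceptually, the shadow page table is just the composition of the guest's mapping with the VMM's mapping, rebuilt whenever the VMM traps a guest page-table update. A toy sketch with flat, single-level tables of a hypothetical size NPAGES (real VMMs of course work with the hardware page-table formats):

```c
#include <stdint.h>
#include <stddef.h>

#define NPAGES  1024u          /* toy address-space size for illustration */
#define INVALID 0xFFFFFFFFu

/* guest_pt : guest virtual page -> "real" page (the guest's view)
 * vmm_map  : "real" page        -> machine/physical page
 * shadow   : guest virtual page -> machine/physical page               */
void rebuild_shadow(const uint32_t guest_pt[NPAGES],
                    const uint32_t vmm_map[NPAGES],
                    uint32_t       shadow[NPAGES])
{
    for (size_t vpn = 0; vpn < NPAGES; vpn++) {
        uint32_t real = guest_pt[vpn];
        /* Compose the two mappings; out-of-range or invalid entries
         * stay unmapped so the hardware will fault into the VMM.     */
        shadow[vpn] = (real < NPAGES) ? vmm_map[real] : INVALID;
    }
}
```

Because the hardware walks only the shadow table, the extra real-to-physical hop costs nothing on ordinary memory accesses; the price is paid when the guest edits its page tables and the VMM must refresh the affected shadow entries.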

  30. Impact of I/O on Virtual Memory • The most difficult part of virtualization: an increasing number of I/O devices attached to the computer; increasing diversity of I/O device types; sharing of a real device among multiple VMs; supporting the myriad of device drivers that are required, especially if different guest OSes are supported on the same VM system • Approach: give each VM generic versions of each type of I/O device driver, and let the VMM handle the real I/O • The method for mapping a virtual to a physical I/O device depends on the type of device: disks are partitioned by the VMM to create virtual disks for guest VMs; network interfaces are shared between VMs in short time slices, and the VMM tracks messages for virtual network addresses to ensure that guest VMs only receive their own messages

  31. Example: Xen VM • Xen: open-source system VMM for the 80x86 ISA; project started at the University of Cambridge, GNU license model • The original vision of a VM is running an unmodified OS, but significant effort is wasted just to keep the guest OS happy • “Paravirtualization”: small modifications to the guest OS to simplify virtualization • 3 examples of paravirtualization in Xen: • To avoid flushing the TLB when invoking the VMM, Xen is mapped into the upper 64 MB of the address space of each VM • The guest OS is allowed to allocate pages; Xen just checks that it doesn't violate protection restrictions • To protect the guest OS from user programs in the VM, Xen takes advantage of the 4 protection levels available in the 80x86: most OSes for the 80x86 keep everything at privilege level 0 or 3; the Xen VMM runs at the highest privilege level (0), the guest OS runs at the next level (1), and applications run at the lowest privilege level (3)

  32. Example: Xen VM* • Figure: the Xen I/O path; a physical interrupt is received by the hypervisor (VMM), demultiplexed to Dom0 (the driver domain), which then delivers a virtual interrupt to the guest domain (DomU) • *D. Guo, et al., “Performance Characterization and Cache-Aware Core Scheduling in a Virtualized Multi-Core Server under 10GbE”, IISWC 2009, Austin TX.

  33. VMM Overhead? • Depends on the workload • User-level processor-bound programs (e.g., SPEC) have zero virtualization overhead: they run at native speeds since the OS is rarely invoked • I/O-intensive workloads are OS-intensive ⇒ they execute many system calls and privileged instructions ⇒ can result in high virtualization overhead • For System VMs, the goal of the architecture and VMM is to run almost all instructions directly on the native hardware • If an I/O-intensive workload is also I/O-bound ⇒ low processor utilization since it is waiting for I/O ⇒ processor virtualization can be hidden ⇒ low virtualization overhead

  34. Performance Characterization (1/4)* • Overview of VM overhead, measured by the number of concurrent sessions supported and ping-pong latency in SPECweb2005 • Figures: supported workload comparison and ping-pong latency comparison for SPECweb2005 • Degradation: 31% for Banking, 38% for Ecommerce and 45% for Support • *D. Guo, et al., “Performance Characterization and Cache-Aware Core Scheduling in a Virtualized Multi-Core Server under 10GbE”, IISWC 2009, Austin TX.

  35. Performance Characterization (2/4) • Architectural characterization • CPI: related to the instruction distribution (more memory-access instructions); up by 50%, 37% and 30% for Support, Banking and Ecommerce • Cache and TLB: L1 I-cache and L2 cache up by 1.7X for Support; TLB up by 30% • Figure (a) Support, (b) Banking, (c) Ecommerce: architectural events for the different workloads, also illustrating the individual contributions of the VMM, Dom0 and DomU-Linux

  36. Performance Characterization (3/4) • Weight of architectural events, native vs. VM • Less normal execution in the VM; additional VMM overhead • VM L2: the main contributor • Poor utilization of the architecture • Figure: scaled weight of each architectural event towards SPECweb2005 execution

  37. Performance Characterization (4/4) • Life-of-packet analysis (functional analysis) • How to reduce the L2 miss overhead from the VMM scheduler? • The Credit Scheduler and Grant Table take up 60% of the L2 misses

  38. Suggested Reading List • General • P. Barham, et al., “Xen and the Art of Virtualization”, SOSP 2003. • I/O optimization • A. Menon, et al., “Diagnosing Performance Overheads in the Xen Virtual Machine Environment”, VEE 2005. • A. Menon, et al., “Optimizing Network Virtualization in Xen”, USENIX 2006. • J. R. Santos, et al., “Bridging the Gap between Software and Hardware Techniques for I/O Virtualization”, USENIX 2008. • VM architecture optimization • P. Willmann, et al., “Concurrent Direct Network Access for Virtual Machine Monitors”, HPCA 2007. • K. Adams, et al., “A Comparison of Software and Hardware Techniques for x86 Virtualization”, ASPLOS 2006. • Scheduling • D. Ongaro, et al., “Scheduling I/O in Virtual Machine Monitors”, VEE 2008. • D. Guo, et al., “Performance Characterization and Cache-Aware Core Scheduling in a Virtualized Multi-Core Server under 10GbE”, IISWC 2009, Austin TX.
