Virtual Memory - PowerPoint PPT Presentation

Presentation Transcript

  1. Virtual Memory

  2. Outline • Virtual address space • Address translation • Accelerating translation with a TLB (translation lookaside buffer) • Multilevel page tables • Different points of view • Suggested reading: 10.1~10.6

  3. 10.1 Physical and Virtual Addressing

  4. Physical Addressing • Attributes of main memory • Organized as an array of M contiguous byte-sized cells • Each byte has a unique physical address (PA), starting from 0 • Physical addressing • The CPU uses physical addresses to access memory • Examples • Early PCs, DSPs, embedded microcontrollers, and Cray supercomputers

  5. Physical Addressing Figure 10.1 P693

  6. Virtual Addressing • Virtual addressing • The CPU accesses main memory using a virtual address (VA) • The virtual address is converted to the appropriate physical address before being sent to main memory

  7. Virtual Addressing • Address translation • Converting a virtual address to a physical one • Requires close cooperation between the CPU hardware and the operating system • The memory management unit (MMU) • Dedicated hardware on the CPU chip that translates virtual addresses on the fly • Uses a look-up table stored in main memory whose contents are managed by the operating system
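The translation the MMU performs on a hit can be sketched numerically. The following is a minimal illustration, not the MMU's actual implementation: it assumes a 4 KB page size and models the page table as a plain Python dict (all names here are invented for the demo).

```python
# Minimal sketch of address translation on a page hit,
# assuming 4 KB pages and a page table given as {vpn: ppn}.
PAGE_SIZE = 4096          # bytes per page (2**12)
OFFSET_BITS = 12          # log2(PAGE_SIZE)

def translate(va, page_table):
    """Split a virtual address into (VPN, offset), look the VPN up,
    and splice the physical page number back onto the offset."""
    vpn = va >> OFFSET_BITS          # virtual page number
    offset = va & (PAGE_SIZE - 1)    # byte offset within the page
    ppn = page_table[vpn]            # raises KeyError on a miss
    return (ppn << OFFSET_BITS) | offset

# VPN 2 cached in physical page 5: VA 0x2ABC -> PA 0x5ABC
print(hex(translate(0x2ABC, {2: 5})))
```

Note that only the page number changes; the offset within the page passes through translation untouched.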

  8. Figure 10.2 P694

  9. 10.2 Address Space

  10. Address Space • Address space • An ordered set of nonnegative integer addresses • Linear space • The integers in the address space are consecutive • N-bit address space • An address space with 2^N addresses

  11. Address Space • K=2^10 (Kilo), M=2^20 (Mega), G=2^30 (Giga), T=2^40 (Tera), P=2^50 (Peta), E=2^60 (Exa) • Practice Problem 10.1 P695:
      Number of address bits (n)    Number of addresses (2^n)    Largest address (2^n - 1)
      8                             256                          255
      16                            64K                          64K-1
      32                            4G                           4G-1
      48                            256T                         256T-1
      64                            16E                          16E-1

  12. Address Space • Data objects and their attributes • Bytes vs. addresses • Each data object can have multiple independent addresses

  13. 10.3 VM as a Tool for Caching

  14. Using Main Memory as a Cache P695 (figure: caching hierarchy of SRAM, DRAM, and disk)

  15. 10.3.1 DRAM Cache Organization

  16. Using Main Memory as a Cache • The DRAM vs. disk gap is more extreme than the SRAM vs. DRAM gap • Access latencies: • DRAM is ~10X slower than SRAM • Disk is ~100,000X slower than DRAM • Bottom line: • Design decisions for DRAM caches are driven by the enormous cost of misses

  17. Design Considerations • Line size? • Large, since disk is better at transferring large blocks • Associativity? • High, to minimize the miss rate • Write-through or write-back? • Write-back, since we can't afford to perform small writes to disk • Write-back defers the memory update as long as possible, writing the updated block back only when it is evicted from the cache by the replacement algorithm
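The write-back policy described above can be sketched in a few lines. This is an illustrative toy, not any real cache's implementation; the class and field names are invented. Writes only set a dirty bit, and the slow backing store is touched once, at eviction:

```python
# Toy write-back cache: defer the backing-store update until eviction.
class WriteBackCache:
    def __init__(self, disk):
        self.disk = disk              # backing store: {block: value}
        self.lines = {}               # block -> (value, dirty)
        self.disk_writes = 0          # count of slow writes performed

    def write(self, block, value):
        self.lines[block] = (value, True)   # defer the disk write

    def evict(self, block):
        value, dirty = self.lines.pop(block)
        if dirty:                     # write back only if modified
            self.disk[block] = value
            self.disk_writes += 1

disk = {}
c = WriteBackCache(disk)
for v in range(100):                  # 100 small writes to one block...
    c.write(7, v)
c.evict(7)
print(disk[7], c.disk_writes)         # ...cost a single disk write
```

A write-through policy would instead pay one slow write per `write()` call, which is exactly what the slide says we cannot afford for disk.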

  18. 10.3.2 Page Tables

  19. Page • Virtual memory • Conceptually organized as an array of contiguous byte-sized cells stored on disk • Each byte has a unique virtual address that serves as an index into the array • The contents of the array on disk are cached in main memory

  20. Page P695 • The data on disk is partitioned into blocks • These serve as the transfer units between the disk and main memory • Virtual pages (VPs) • Physical pages (PPs), also referred to as page frames

  21. Page Attributes P695 • 1) Unallocated: • Pages that have not yet been allocated (or created) by the VM system • Do not have any data associated with them • Do not occupy any space on disk.

  22. Page Attributes • 2) Cached: • Allocated pages that are currently cached in physical memory. • 3) Uncached: • Allocated pages that are not cached in physical memory.

  23. Page Figure 10.3 P696

  24. Page Table • Each allocated page of virtual memory has an entry in the page table • Mapping from virtual pages to physical pages • From uncached form to cached form • A page table entry exists even if the page is not in memory • It specifies the page's disk address • The OS uses it to retrieve the page
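A page table entry of the kind the slide describes can be modeled directly. This is a conceptual sketch with invented field names, not a real hardware format: a valid bit says whether the page is cached, and an invalid-but-allocated entry records a disk address instead of a physical page number.

```python
# Toy page-table entry: valid bit plus either a physical page
# number (cached) or a disk address (allocated but uncached).
from dataclasses import dataclass

@dataclass
class PTE:
    valid: bool
    ppn: int = None        # physical page number, if cached
    disk_addr: int = None  # where the page lives on disk, if not

page_table = {
    0: PTE(valid=True,  ppn=3),            # cached page
    1: PTE(valid=False, disk_addr=0x8000), # allocated, uncached
}

assert page_table[0].valid and page_table[0].ppn == 3
assert not page_table[1].valid and page_table[1].disk_addr == 0x8000
```

An unallocated page would simply have no entry at all (or a null one), matching the three page states on slides 21 and 22.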

  25. Page Table (figure: a page table as a "cache" directory, mapping each object name to whether it is cached and to its location in memory or on disk)

  26. Page Table (figure: a memory-resident page table indexed by virtual page number; each entry holds a valid bit plus a physical page number or disk address, pointing either into physical memory or into disk storage, i.e., a swap file or a regular file system file)

  27. 10.3.3 Page Hits

  28. Page Hits Figure 10.5 P698 • Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table)

  29. 10.3.4 Page Faults

  30. Page Faults • The page table entry indicates that the virtual address is not in memory • The OS exception handler is invoked to move the data from disk into memory • The current process suspends; others can resume • The OS has full control over placement, etc.

  31. Page Faults Figure 10.6 P699 (before fault), Figure 10.7 P699 (after fault) • Swapping or paging: transferring pages between disk and memory • Swapped out or paged out: from DRAM to disk • Demand paging: waiting until the last moment, when a miss occurs, to swap in a page
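The fault-and-retry cycle above can be sketched as a small simulation. All names here are invented for illustration; the replacement policy is simple FIFO, whereas a real OS uses something more refined:

```python
# Toy demand-paging simulation: a miss "faults", the handler evicts
# a victim if memory is full, swaps the requested page in, and the
# access then succeeds.
class Memory:
    def __init__(self, nframes):
        self.nframes = nframes
        self.resident = []        # VPNs currently in DRAM, oldest first
        self.faults = 0

    def access(self, vpn):
        if vpn in self.resident:  # page hit: nothing to do
            return "hit"
        self.faults += 1          # page fault: trap to the OS handler
        if len(self.resident) == self.nframes:
            self.resident.pop(0)  # swap out the oldest page (FIFO)
        self.resident.append(vpn) # swap in the requested page
        return "fault"

mem = Memory(nframes=2)
assert mem.access(0) == "fault"   # cold miss: page 0 swapped in
assert mem.access(0) == "hit"     # now resident
assert mem.access(1) == "fault"
assert mem.access(2) == "fault"   # memory full: page 0 evicted
assert mem.access(0) == "fault"   # page 0 must be swapped in again
```

Note that nothing is swapped in until it is actually referenced, which is the "demand" in demand paging.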

  32. Servicing a Page Fault • (1) Initiate block read • The processor signals the I/O controller: read a block of length P starting at disk address X and store it starting at memory address Y (figure: processor with registers and cache, memory-I/O bus, I/O controller, memory, disk)

  33. Servicing a Page Fault • (2) DMA transfer • The read occurs as a direct memory access (DMA), under the control of the I/O controller

  34. Servicing a Page Fault • (3) Read done • The I/O controller signals completion by interrupting the processor • The OS resumes the suspended process

  35. 10.3.5 Allocating Pages

  36. Allocating Pages P700 • The operating system allocates a new page of virtual memory, for example, as a result of calling malloc. Figure 10.8 P700
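When a process asks for more memory, the OS only has to create page-table entries for fresh virtual pages; the physical pages arrive later, on first touch. Python's standard `mmap` module can request such anonymous pages directly, which makes for a rough analog of the `malloc` case the slide mentions (this is an illustration, not the slide's own example):

```python
# Ask the OS for one fresh, anonymous 4 KB page of virtual memory.
import mmap

page = mmap.mmap(-1, 4096)   # -1: anonymous mapping, not file-backed
page[0:5] = b"hello"         # first touch faults the page in
assert page[0:5] == b"hello"
page.close()
```

The allocation itself is cheap; the expensive work (finding a physical page, zero-filling it) is deferred to the first access, by the same demand-paging machinery as above.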

  37. 10.3.6 Locality to the Rescue Again

  38. Locality P700 • The principle of locality promises that at any point in time a program will tend to work on a smaller set of active pages, known as its working set or resident set • After an initial overhead in which the working set is paged into memory, subsequent references to the working set result in hits, with no additional disk traffic

  39. Locality-2 P700 • If the working set size exceeds the size of physical memory, the program can produce an unfortunate situation known as thrashing, where pages are swapped in and out continuously
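Thrashing can be made concrete with a tiny simulation (illustrative only; names are invented): with k physical frames and a program that cycles through k+1 pages, an LRU-style replacer faults on every single reference, while one more frame would eliminate all but the cold misses.

```python
# Count page faults for a reference string under LRU replacement.
def lru_faults(refs, nframes):
    frames, faults = [], 0
    for vpn in refs:
        if vpn in frames:
            frames.remove(vpn)      # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # evict the least recently used
        frames.append(vpn)          # most recently used at the end
    return faults

refs = [0, 1, 2, 3] * 25            # working set of 4 pages, 100 refs
print(lru_faults(refs, nframes=4))  # working set fits: 4 cold faults
print(lru_faults(refs, nframes=3))  # one frame short: all 100 fault
```

The cliff between "fits" and "one page too big" is exactly why working-set size relative to physical memory matters so much.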

  40. 10.4 VM as a Tool for Memory Management

  41. A Tool for Memory Management • Separate virtual address spaces • Each process has its own virtual address space • This simplifies linking, sharing, loading, and memory allocation

  42. A Tool for Memory Management Figure 10.9 P701 (figure: the virtual address spaces of two processes mapped by address translation into a single physical address space; a read-only library page is mapped by both processes into the same physical page)

  43. 10.4.1 Simplifying Linking

  44. A Tool for Memory Management Figure 10.10 P702 • Linux/x86 process memory image, from high to low addresses: • Kernel virtual memory (invisible to user code), from 0xc0000000 • User stack (%esp), growing down from 0xbfffffff • Memory-mapped region for shared libraries, at 0x40000000 • Runtime heap (via malloc), growing up to the "brk" pointer • Uninitialized data (.bss) • Initialized data (.data) • Program text (.text), from 0x08048000 • Forbidden region at the bottom

  45. 10.4.2 Simplifying Sharing

  46. Simplifying Sharing • In some instances, it is desirable for processes to share code and data • Examples: the operating system kernel code, and the routines in the standard C library that every process calls • The operating system can arrange for multiple processes to share a single copy of this code by mapping the appropriate virtual pages in different processes to the same physical pages

  47. 10.4.3 Simplifying Memory Allocation

  48. Simplifying Memory Allocation • Virtual memory provides a simple mechanism for allocating additional memory to user processes • Allocation reduces to page-table work: the OS allocates an appropriate number of virtual pages and maps them to physical pages, which need not be contiguous

  49. 10.4.4 Simplifying Loading

  50. Simplifying Loading • Loading executable and shared object files into memory is simplified by memory mapping (mmap): the loader maps regions of the file into the process's virtual address space, and pages are brought in on demand
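File-backed memory mapping can be demonstrated with Python's `mmap` module. This is a sketch of the idea, not a real loader; the file name and contents are made up for the demo:

```python
# Map a file into the address space and read it through memory,
# the way mmap-based loaders access object files.
import mmap, os, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"\x7fELF...pretend object file")
os.close(fd)

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    assert m[:4] == b"\x7fELF"   # file bytes, read via memory access
    m.close()
os.remove(path)
```

No explicit read of the file body happens; touching the mapped bytes is what pulls the corresponding pages in from disk, which is why mapped loading composes so naturally with demand paging.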