
Virtual Memory



  1. Virtual Memory

  2. Virtual Memory: Motivation
Historically, there were two major motivations for virtual memory: to allow efficient and safe sharing of memory among multiple programs, and to remove the programming burden of a small, limited amount of main memory. Patt&Henn 04
"…a system has been devised to make the core-drum combination appear to the programmer as a single-level store, the requisite transfers taking place automatically." Kilburn et al.

  3. Purpose of Virtual Memory
• Provide sharing
• Automatically manage the memory hierarchy (as "one level")
• Simplify loading (for relocation)
[Figure: the main processor issues logical addresses to the memory-management unit, which translates them into physical addresses; data and control flow between the high-speed cache, main memory, and the backing store.]

  4. Structure of Virtual Memory
[Figure: the virtual address from the processor enters an address translator, which produces the physical address sent to memory; on a page fault, an elaborate software page-fault handling algorithm takes over.]

  5. A Paging System
• 64K virtual address space
• 32K main memory
• 4K pages
[Figure: virtual addresses on the left map to main-memory addresses on the right, both divided into 4K pages.]
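
The address arithmetic implied by these numbers can be sketched in a few lines of C (a hypothetical illustration, not part of the slides): with 4K pages, a 16-bit virtual address splits into a 4-bit virtual page number and a 12-bit offset, and the 32K main memory holds 8 page frames.

```c
#include <stdio.h>

/* Parameters from the slide: 64K virtual space, 32K main memory, 4K pages. */
#define PAGE_SIZE   4096u                    /* 4K bytes per page      */
#define OFFSET_BITS 12u                      /* log2(4096)             */
#define NUM_VPAGES  (65536u / PAGE_SIZE)     /* 16 virtual pages       */
#define NUM_FRAMES  (32768u / PAGE_SIZE)     /*  8 physical frames     */

int main(void) {
    unsigned va     = 0x6123;                /* an example virtual address */
    unsigned vpn    = va >> OFFSET_BITS;     /* virtual page number: 6     */
    unsigned offset = va & (PAGE_SIZE - 1);  /* offset within the page     */
    printf("va=0x%04x -> vpn=%u, offset=0x%03x (%u virtual pages, %u frames)\n",
           va, vpn, offset, NUM_VPAGES, NUM_FRAMES);
    return 0;
}
```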

  6. Page Table
[Figure: each virtual page has a page-table entry giving its page frame in main memory; 1 = present in main memory, 0 = not present in main memory.]

  7. Address Translation (see P&H Fig. 7.19 3rd Ed. or 5.19 4th Ed.)
In virtual memory, blocks of memory (called pages) are mapped from one set of addresses (called virtual addresses) to another set (called physical addresses).

  8. Page Faults (see P&H Fig. 7.22 3rd Ed. or 5.22 4th Ed.)
If the valid bit for a virtual page is off, a page fault occurs and the operating system must be given control. Once the operating system gets control, it must find the page in the next level of the hierarchy (usually magnetic disk) and decide where to place the requested page in main memory.

  9. Virtual Address Mapping
[Figure: the virtual address is split into a page number and a displacement (address within page); the page number indexes the page map to obtain the base address of the page in memory, and the displacement is added to reach the word within that page.]
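
A minimal C sketch of the mapping this slide diagrams (names such as struct pme and page_map are illustrative, and 4K pages are assumed): the page number indexes the page map, and the page's base address plus the displacement gives the physical address.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_BITS 12u                    /* assume 4K pages               */
#define PAGE_MASK ((1u << PAGE_BITS) - 1u)
#define NUM_PAGES 16u                    /* small illustrative page map   */

/* Illustrative page-map entry, following the slide: a present bit
 * (1 = present in main memory) and the base address of the page.        */
struct pme {
    bool     present;
    uint32_t page_base;                  /* base address of the page      */
};

static struct pme page_map[NUM_PAGES];

/* Translate a virtual address via the page map: the page number indexes
 * the map, and the displacement is added to the page's base address.    */
static uint32_t translate(uint32_t va) {
    uint32_t vpn  = va >> PAGE_BITS;     /* page number                   */
    uint32_t disp = va & PAGE_MASK;      /* displacement within the page  */
    if (!page_map[vpn].present) {
        fprintf(stderr, "page fault on vpn %u\n", (unsigned)vpn);
        exit(1);                         /* the OS would handle this      */
    }
    return page_map[vpn].page_base + disp;
}

int main(void) {
    page_map[3] = (struct pme){ .present = true, .page_base = 0x8000 };
    uint32_t va = (3u << PAGE_BITS) | 0x2Au;
    printf("pa = 0x%x\n", (unsigned)translate(va));   /* prints pa = 0x802a */
    return 0;
}
```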

  10. Terminology
• Page
• Page fault
• Virtual address
• Physical address
• Memory mapping or address translation

  11. VM Simplifies Loading
• VM provides a relocation function.
• Address mapping allows programs to be loaded at any location in physical memory.
• Under VM, relocation does not need the special OS and hardware support it required in the past.

  12. Address Translation Considerations
• Direct mapping using register sets.
• Indirect mapping using tables.
• Associative mapping of frequently used pages.

  13. What happens on a write?
• Write-through to secondary storage is impractical for VM.
• Write-back is used instead:
  • Advantages: reduces the number of writes to disk and amortizes their cost.
  • A dirty bit records whether the page has been modified.
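
A small illustration of the write-back policy and the dirty bit (the names are hypothetical, and write_page_to_disk stands in for the real disk driver): stores only set the bit, and the disk is written at most once, at eviction time.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical frame descriptor: the dirty bit records whether the page
 * has been modified since it was brought in from disk.                  */
struct frame {
    uint32_t vpn;      /* virtual page currently held in this frame */
    bool     dirty;    /* set on any store to the page              */
};

/* Stand-in for the real disk driver call. */
static void write_page_to_disk(uint32_t vpn) { (void)vpn; }

/* On a store, only the dirty bit is set; no disk traffic happens here.
 * That is the write-back policy.                                        */
static void note_store(struct frame *f) { f->dirty = true; }

/* At eviction the page is written back only if it is dirty, which is
 * what reduces and amortizes the number of writes to disk.              */
static void evict(struct frame *f) {
    if (f->dirty) {
        write_page_to_disk(f->vpn);
        f->dirty = false;
    }
    /* a clean page is simply dropped: the disk copy is still valid */
}

int main(void) {
    struct frame f = { .vpn = 7, .dirty = false };
    note_store(&f);   /* the program writes to the page: mark it dirty    */
    evict(&f);        /* one disk write at eviction, not one per store    */
    return 0;
}
```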

  14. An Example
Case 1: VM page size = 512 bytes, VM address space = 64K.
Total virtual pages = 64K / 512 = 128 pages.

  15. An Example (cont'd)
Case 2: VM page size = 512 bytes, VM address space = 4G = 2^32 bytes.
Total virtual pages = 4G / 512 = 2^32 / 2^9 = 2^23 = 8M pages.
If each PTE needs 13 bits (stored in roughly 4 bytes per entry), total PT size ≈ 8M × 4 = 32M bytes.
Note: this assumes main memory has 4M bytes, i.e. 4M / 512 = 2^22 / 2^9 = 2^13 frames, so 13 bits suffice for a frame number.
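
The same arithmetic written out as a small C program so the powers of two are easy to check (the 4-byte stored PTE is the assumption behind the "× 4" above):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t vm_space  = 1ull << 32;    /* 4G virtual address space        */
    uint64_t page_size = 512;           /* 2^9 bytes per page              */
    uint64_t mem_size  = 1ull << 22;    /* 4M bytes of main memory         */
    uint64_t pte_bytes = 4;             /* assumed size of one stored PTE  */

    uint64_t vpages = vm_space / page_size;   /* 2^23 = 8M virtual pages   */
    uint64_t frames = mem_size / page_size;   /* 2^13 frames -> 13-bit PFN */
    uint64_t ptsize = vpages * pte_bytes;     /* ~32M bytes of page table  */

    printf("virtual pages = %llu, frames = %llu, PT size = %llu bytes\n",
           (unsigned long long)vpages,
           (unsigned long long)frames,
           (unsigned long long)ptsize);
    return 0;
}
```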

  16. An Example (cont'd)
How about a VM address space of 2^52 bytes (R-6000), i.e. 4 petabytes, with a 4K-byte page size?
Total number of virtual pages = 2^52 / 2^12 = 2^40.

  17. Techniques for Reducing PT Size
• Set a lower limit, and permit dynamic growth.
• Permit growth from both directions.
• Inverted page table (a hash table).
• Multi-level page table (segments and pages).
• The PT itself can be paged, i.e. the PT itself is placed in the virtual address space (note: some small portion of the pages must stay in main memory and never be paged out).

  18. Two-Level Address Mapping
[Figure: the virtual address is split into an 11-bit segment number, an 11-bit page number, and a 10-bit displacement (address within page). The segment number indexes a 2048-entry segment table to obtain the base address of a page table; the page number indexes that 2048-entry page table to obtain the base address of a page; the displacement then selects one of the 1024 words of the page in memory.]
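
A C sketch of this two-level walk using the field widths from the slide (the table names are illustrative, and only one page table is populated for the example):

```c
#include <stdint.h>
#include <stdio.h>

/* Field widths from the slide: 11-bit segment number, 11-bit page number,
 * 10-bit displacement -> 2048-entry tables and 1024-word pages.           */
#define SEG_BITS   11u
#define PAGE_BITS  11u
#define DISP_BITS  10u
#define SEG_ENTRIES  (1u << SEG_BITS)      /* 2048 */
#define PAGE_ENTRIES (1u << PAGE_BITS)     /* 2048 */

static uint32_t *segment_table[SEG_ENTRIES];  /* entry -> base of a page table   */
static uint32_t  a_page_table[PAGE_ENTRIES];  /* entry -> base address of a page */

/* Two-level translation: segment table -> page table -> page base + displacement. */
static uint32_t translate2(uint32_t va) {
    uint32_t seg  = (va >> (PAGE_BITS + DISP_BITS)) & (SEG_ENTRIES - 1);
    uint32_t page = (va >> DISP_BITS) & (PAGE_ENTRIES - 1);
    uint32_t disp =  va & ((1u << DISP_BITS) - 1);

    uint32_t *page_table = segment_table[seg];   /* base address of the page table */
    uint32_t  page_base  = page_table[page];     /* base address of the page       */
    return page_base + disp;                     /* word within the page           */
}

int main(void) {
    segment_table[2] = a_page_table;     /* segment 2 uses this page table     */
    a_page_table[5]  = 0x40000;          /* page 5 of segment 2 starts here    */
    uint32_t va = (2u << (PAGE_BITS + DISP_BITS)) | (5u << DISP_BITS) | 17u;
    printf("pa = 0x%x\n", (unsigned)translate2(va));   /* prints pa = 0x40011 */
    return 0;
}
```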

  19. VM: Implementation Issues
• Page fault handling.
• Translation lookaside buffer (TLB).
• Protection issues.

  20. Page Fault Handling
• When a virtual page number is not in the TLB, the PT in memory is accessed (through the PTBR) to find the PTE.
• If the PTE indicates that the page is missing, a page fault occurs.
• Context switch!
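
The flow on this slide, sketched in C with simple stand-ins for the hardware and OS pieces (the single-entry TLB and os_handle_page_fault are placeholders, not the real mechanisms):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_VPAGES 16u

typedef struct { uint32_t frame; bool valid; } pte_t;

static pte_t  page_table[NUM_VPAGES];   /* the PT in main memory          */
static pte_t *ptbr = page_table;        /* page-table base register       */

/* A stand-in TLB that remembers one recent translation. */
static struct { uint32_t vpn, frame; bool valid; } tlb;

static bool tlb_lookup(uint32_t vpn, uint32_t *frame) {
    if (tlb.valid && tlb.vpn == vpn) { *frame = tlb.frame; return true; }
    return false;
}

/* Stand-in for the OS handler: it would context-switch, fetch the page
 * from disk, update the PT, and eventually restart the instruction.    */
static uint32_t os_handle_page_fault(uint32_t vpn) {
    printf("page fault on vpn %u\n", (unsigned)vpn);
    ptbr[vpn].frame = vpn % 8u;          /* pretend placement decision   */
    ptbr[vpn].valid = true;
    return ptbr[vpn].frame;
}

/* Slide flow: VPN not in the TLB -> access the PT in memory through the
 * PTBR -> if the PTE says the page is missing, take a page fault.       */
static uint32_t get_frame(uint32_t vpn) {
    uint32_t frame;
    if (tlb_lookup(vpn, &frame))                 /* TLB hit               */
        return frame;
    pte_t pte = ptbr[vpn];                       /* PT walk via the PTBR  */
    frame = pte.valid ? pte.frame : os_handle_page_fault(vpn);
    tlb.vpn = vpn; tlb.frame = frame; tlb.valid = true;  /* refill TLB    */
    return frame;
}

int main(void) {
    printf("frame = %u\n", (unsigned)get_frame(3));  /* TLB miss + page fault */
    printf("frame = %u\n", (unsigned)get_frame(3));  /* TLB hit               */
    return 0;
}
```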

  21. Making Address Translation Fast (see P&H Fig. 7.23 3rd Ed. or 5.23 4th Ed.)
The TLB acts as a cache on the page table, holding only the entries that map to physical pages (i.e. pages resident in main memory).

  22. Typical Values for a TLB in 2008 (see P&H Fig. 5.29 4th Ed.)
Although the range of values is wide, this is partially because many of the values that have shifted over time are related; for example, as caches become larger to overcome larger miss penalties, block sizes also grow.

  23. TLB Design
• Placement policy:
  • Small TLBs: full associativity can be used.
  • Large TLBs: full associativity may be too slow.
• Replacement policy: sometimes even a random policy is used for speed/simplicity.
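
A minimal sketch of a small fully associative TLB with random replacement, matching the placement and replacement points above (sizes and names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define TLB_ENTRIES 16u   /* small TLB, so full associativity is practical */

struct tlb_entry {
    uint32_t vpn;      /* virtual page number (the tag) */
    uint32_t pfn;      /* physical frame number         */
    bool     valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Fully associative lookup: any entry may hold any page, so the VPN is
 * compared against every entry (hardware does this in parallel).        */
static bool tlb_probe(uint32_t vpn, uint32_t *pfn) {
    for (unsigned i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pfn = tlb[i].pfn;
            return true;                 /* TLB hit  */
        }
    }
    return false;                        /* TLB miss */
}

/* On a miss, insert the new translation; a random victim keeps the
 * replacement logic fast and simple, as the slide notes.                */
static void tlb_refill(uint32_t vpn, uint32_t pfn) {
    unsigned victim = (unsigned)rand() % TLB_ENTRIES;
    tlb[victim] = (struct tlb_entry){ .vpn = vpn, .pfn = pfn, .valid = true };
}

int main(void) {
    uint32_t pfn;
    tlb_refill(42, 7);                     /* install one translation        */
    return tlb_probe(42, &pfn) ? 0 : 1;    /* expect a hit on the same VPN   */
}
```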

  24. Example: FastMATH (see P&H Fig. 7.25 3rd Ed. or 5.25 4th Ed.)
Processing a read or a write-through in the Intrinsity FastMATH TLB and cache.

  25. Integrating VM, TLBs and Caches (see P&H Fig. 7.24 3rd Ed. or 5.24 4th Ed.)
• The TLB and cache implement the process of going from a virtual address to a data item in the Intrinsity FastMATH.
• This figure shows the organization of the TLB and the data cache, assuming a 4 KB page size.
• This diagram focuses on a read.
