
Paging (Review and Continuation)



Presentation Transcript


  1. Paging (Review and Continuation)
  CS-502, Operating Systems, Fall 2009 (EMC)
  (Slides include materials from Modern Operating Systems, 3rd ed., by Andrew Tanenbaum and from Operating System Concepts, 7th ed., by Silberschatz, Galvin, & Gagne)

  2. Review
  • Virtual Address
    • Address space in which the process “thinks”
    • Each virtual address is translated “on the fly” to a physical address
  • Fragmentation
    • Internal fragmentation – unused space within an allocation
    • External fragmentation – unused space between allocations
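The internal-fragmentation case is easy to quantify. A minimal Python sketch (the 4 KB page size matches later slides; the helper name and example request size are illustrative, not from the slides):

```python
PAGE_SIZE = 4096  # assume 4 KB pages, as in the later slides

def internal_fragmentation(request_bytes: int) -> int:
    """Bytes wasted inside the last page of a page-granular allocation."""
    remainder = request_bytes % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder

# A 10,050-byte request consumes 3 pages (12,288 bytes), wasting 2,238.
print(internal_fragmentation(10050))
```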

  3. Paging
  [Diagram: logical address space (pages 0 … X) translated by the MMU to scattered physical memory frames (0 … Y)]
  • Use small fixed-size units in both physical and virtual memory
  • Provide sufficient MMU hardware to allow page units to be scattered across memory
  • Make it possible to leave infrequently used parts of virtual address space out of physical memory
  • Solve internal & external fragmentation problems

  4. Paging Translation
  [Diagram: the logical address splits into a virtual page # and an offset; the page table maps the virtual page # to a page frame number, which is combined with the offset to form the physical address]
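The translation in the diagram can be sketched in a few lines of Python (assuming 4 KB pages and a flat VPN → PFN table; the names are illustrative):

```python
OFFSET_BITS = 12                               # 4 KB pages -> 12-bit offset

def translate(vaddr: int, page_table: dict) -> int:
    vpn = vaddr >> OFFSET_BITS                 # high bits: virtual page #
    offset = vaddr & ((1 << OFFSET_BITS) - 1)  # low bits: offset within page
    pfn = page_table[vpn]                      # a KeyError here would be a page fault
    return (pfn << OFFSET_BITS) | offset       # physical address

page_table = {0: 5, 1: 2, 2: 7}                # VPN -> PFN
print(hex(translate(0x1ABC, page_table)))      # VPN 1 maps to PFN 2, so 0x2abc
```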

  5. Generic PTE Structure
  Field layout (bit widths): V (1) | R (1) | M (1) | prot (2) | page frame number (20)
  • Valid bit gives the state of this Page Table Entry (PTE)
    • Says whether the virtual address is valid – in memory and within the VA range
    • If not set, the page might not be in memory or may not even exist!
  • Reference bit says whether the page has been accessed
    • Set by hardware whenever the page is read or written
  • Modify bit says whether the page is dirty
    • Set by hardware on every write to the page
  • Protection bits control which operations are allowed
    • Read, write, execute, etc.
  • Page frame number (PFN) determines the physical page
    • Physical page start address
  • Other bits depend upon the machine architecture
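Packing and unpacking these fields is just shifts and masks. A sketch following the slide's V/R/M/prot/PFN layout (the function names are illustrative):

```python
PFN_BITS = 20
PROT_SHIFT, M_SHIFT, R_SHIFT, V_SHIFT = 20, 22, 23, 24

def make_pte(valid, ref, mod, prot, pfn):
    """Pack V (1), R (1), M (1), prot (2), PFN (20) into one integer."""
    return (valid << V_SHIFT) | (ref << R_SHIFT) | (mod << M_SHIFT) | \
           (prot << PROT_SHIFT) | pfn

def pfn_of(pte):   return pte & ((1 << PFN_BITS) - 1)
def is_valid(pte): return (pte >> V_SHIFT) & 1

pte = make_pte(valid=1, ref=0, mod=1, prot=0b11, pfn=0x12345)
print(is_valid(pte), hex(pfn_of(pte)))
```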

  6. Steps in Handling a Page Fault
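The figure's sequence — check validity, find a free frame, read the page in from the backing store, update the PTE, and retry — can be sketched as follows (the dict-based structures and all names are illustrative):

```python
def handle_page_fault(vpn, page_table, free_frames, backing_store, memory):
    """Sketch of the classic steps in servicing a page fault."""
    if vpn not in backing_store:           # 1. invalid reference?
        raise MemoryError("segmentation fault")
    frame = free_frames.pop()              # 2. grab a free frame (assume one exists)
    memory[frame] = backing_store[vpn]     # 3. read the page in from disk
    page_table[vpn] = frame                # 4. update the PTE, mark it valid
    return frame                           # 5. caller restarts the faulting instruction
```

If no free frame exists, a replacement policy would first evict a victim page, writing it back if dirty.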

  7. Definitions and Observations
  • Recurring themes in paging
    • Temporal locality – locations referenced recently tend to be referenced again soon
    • Spatial locality – locations near recent references tend to be referenced soon
  • Definitions
    • Working set: the set of pages that a process needs to run without frequent page faults
    • Thrashing: excessive page faulting due to insufficient frames to support the working set
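Under the common window-based definition, the working set is just the distinct pages touched in the last Δ references. A small sketch (the reference string is made up):

```python
def working_set(refs, delta):
    """Distinct pages referenced in the last `delta` references."""
    return set(refs[-delta:])

refs = [1, 2, 3, 2, 1, 4, 4, 4]   # temporal locality: recent pages recur
print(working_set(refs, 3))        # late in the run, only page 4 is needed
print(working_set(refs, 6))
```

If the process has fewer frames than its working-set size, it faults constantly: thrashing.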

  8. Paging Issues
  • #1 — Page tables can consume large amounts of space
    • If a PTE is 4 bytes, pages are 4 KB, and the VA space is 32 bits → 4 MB for each process’s page table
    • Stored in main memory!
    • What happens for 64-bit logical address spaces?
  • #2 — Performance impact
    • Converting a virtual to a physical address requires multiple operations to access memory
      • Read the Page Table Entry from memory!
      • Get the page frame number
      • Construct the physical address
      • Assorted protection and valid checks
    • Without fast hardware support, requires multiple memory accesses and a lot of work per logical address
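The 4 MB figure follows directly from the numbers on the slide; a quick check (the helper name is illustrative):

```python
def page_table_bytes(va_bits, page_bytes, pte_bytes):
    """Size of a single-level page table covering the whole VA space."""
    num_ptes = 2 ** va_bits // page_bytes    # one PTE per virtual page
    return num_ptes * pte_bytes

print(page_table_bytes(32, 4096, 4))   # 4,194,304 bytes = 4 MB, as on the slide
print(page_table_bytes(64, 4096, 8))   # 2**55 bytes: why flat tables fail at 64 bits
```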

  9. Issue #1: Page Table Size
  • Process virtual address spaces
    • Not usually full – don’t need every PTE
    • Processes do exhibit locality – only need a subset of the PTEs
  • Two-level page tables
    • I.e., virtual addresses have 3 parts
      • Master page number – points to a secondary page table
      • Secondary page number – points to the PTE containing the page frame #
      • Offset
    • Physical address = offset + startof(PFN)

  10. Two-level Page Tables
  [Diagram: the virtual address splits into a 12-bit master page #, an 8-bit secondary page #, and a 12-bit offset; the master page table entry points to a secondary page table, whose entry supplies the page frame number that is combined with the offset to form the physical address]
  • n-level page tables are common in 64-bit systems
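The two-level walk in the diagram, using the slide's 12/8/12-bit split (the dict-based tables and names are illustrative):

```python
MASTER_BITS, SECONDARY_BITS, OFFSET_BITS = 12, 8, 12

def translate_2level(vaddr, master_table):
    master_idx = vaddr >> (SECONDARY_BITS + OFFSET_BITS)
    secondary_idx = (vaddr >> OFFSET_BITS) & ((1 << SECONDARY_BITS) - 1)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    secondary = master_table[master_idx]     # missing -> secondary table not allocated
    pfn = secondary[secondary_idx]
    return (pfn << OFFSET_BITS) | offset

master = {0: {1: 0x2A}}                      # only one secondary table allocated
print(hex(translate_2level(0x1234, master)))
```

The space saving comes from allocating secondary tables only for the parts of the address space actually in use.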

  11. Paging Issues
  • Minor issue – internal fragmentation
    • Memory allocation in units of pages (also file allocation!)
  • #1 — Page tables can consume large amounts of space
    • If a PTE is 4 bytes, pages are 4 KB, and the VA space is 32 bits → 4 MB for each process’s page table
    • What happens for 64-bit logical address spaces?
  • #2 — Performance impact
    • Converting a virtual to a physical address requires multiple operations to access memory
      • Read the Page Table Entry from memory!
      • Get the page frame number
      • Construct the physical address
      • Assorted protection and valid checks
    • Without fast hardware, requires multiple memory accesses and a lot of work per logical address

  12. Solution — Associative Memory (aka Dynamic Address Translation — DAT; aka Translation Lookaside Buffer — TLB)
  [Diagram: associative memory mapping VPN → PTE]
  • Do a fast hardware search of all entries in parallel for the VPN
  • If present, use the page frame number from the PTE directly
  • If not:
    • Look up in the page table (multiple memory accesses)
    • Load the VPN and PTE into the associative memory (evicting another entry as needed)

  13. Translation Lookaside Buffer (TLB)
  • Associative memory implemented in hardware
    • Translates VPN to PTE (containing the PFN)
    • Done in a single machine cycle
  • The TLB is a hardware assist
    • Fully or partially associative – entries searched in parallel with the VPN as index
    • Returns the page frame number (PFN)
    • The MMU uses the PFN and offset to get the physical address
  • Locality makes TLBs work
    • Usually 8–1024 TLB entries
    • Sufficient to deliver a 99%+ hit rate (in most cases)
  • Works well with multi-level page tables
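A tiny software model of a TLB with LRU replacement (a real TLB searches all entries in parallel in hardware; the OrderedDict here merely models recency order, and the class name is illustrative):

```python
from collections import OrderedDict

class TLB:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = OrderedDict()          # VPN -> PFN, oldest first

    def lookup(self, vpn):
        if vpn in self.entries:               # hit: refresh recency, return PFN
            self.entries.move_to_end(vpn)
            return self.entries[vpn]
        return None                           # miss: caller walks the page table

    def insert(self, vpn, pfn):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
        self.entries[vpn] = pfn
```

On a miss, the MMU (or OS) walks the possibly multi-level page table and then inserts the translation for reuse.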

  14. MMU with TLB

  15. Typical Machine Architecture
  [Diagram: the CPU (where the MMU and TLB live) connects through a bridge to memory, graphics, and I/O devices]

  16. TLB-related Policies
  • The OS must ensure that the TLB and page tables are consistent
    • When the OS changes bits (e.g., protection) in a PTE, it must invalidate the TLB copy
    • Dirty and reference bits in the TLB must be written back to the PTE
  • TLB replacement policies
    • Random
    • Least Recently Used (LRU) – with hardware help
  • What happens on a context switch?
    • Each process has its own page tables (possibly multi-level)
    • Must invalidate all TLB entries
    • Then the TLB fills up again as the new process executes
    • Expensive context switches just got more expensive!
    • Note the benefit of threads sharing the same address space

  17. Questions?

  18. Alternative Page Table Format – Hashed
  • Common in address spaces > 32 bits
  • The virtual page number is hashed into a page table
  • Each entry contains a chain of VPNs with the same hash
  • Search the chain for a match
    • If found, use the associated PTE
  • Still needs a TLB for performance
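A minimal sketch of a hashed page table with chaining (the table size, hash function, and names are illustrative):

```python
TABLE_SIZE = 16
hashed_pt = [[] for _ in range(TABLE_SIZE)]     # each slot holds a chain of (VPN, PFN)

def hpt_insert(vpn, pfn):
    hashed_pt[vpn % TABLE_SIZE].append((vpn, pfn))

def hpt_lookup(vpn):
    for v, pfn in hashed_pt[vpn % TABLE_SIZE]:  # walk the chain for a VPN match
        if v == vpn:
            return pfn
    return None                                 # not mapped -> page fault

hpt_insert(5, 100)
hpt_insert(21, 200)        # 21 % 16 == 5: collides, lands on the same chain
print(hpt_lookup(21))
```

The table is sized to the pages actually mapped rather than the whole virtual address space, which is what makes it attractive beyond 32 bits.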

  19. Hashed Page Tables
  • Advantage: shorter, denser page tables
  • Disadvantage: TLB faults trap to software for loading (implies a larger TLB)

  20. Alternative Page Table Format – Inverted
  • One entry for each real page of memory
  • Each entry consists of the virtual address of the page stored at that real memory location, with information about the process that owns the page
  • Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs
  • Use a hash table to limit the search to one or a few page-table entries
  • Still uses a TLB to speed up execution
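A sketch of the inverted layout — one entry per physical frame, keyed by (pid, VPN). The linear search below stands in for the hash-assisted search the slide mentions; all names are illustrative:

```python
NUM_FRAMES = 8
inverted_pt = [None] * NUM_FRAMES        # frame # -> (pid, vpn), or None if free

def ipt_map(pid, vpn, frame):
    inverted_pt[frame] = (pid, vpn)

def ipt_lookup(pid, vpn):
    for frame, entry in enumerate(inverted_pt):   # real designs hash (pid, vpn)
        if entry == (pid, vpn):
            return frame
    return None                                   # not resident -> page fault

ipt_map(pid=1, vpn=42, frame=3)
print(ipt_lookup(1, 42))
```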

  21. Inverted Page Table Architecture
  • Advantage: smallest possible page tables; all processes share one set of page tables
  • Disadvantage: same as hashed page tables
  • See also Tanenbaum, Fig. 3-14

  22. Inverted Page Table Architecture (cont’d)
  • Apollo Domain systems
  • Common in 64-bit address systems
  • Supported by the TLB
  • The next topic helps to quantify the size of the TLB

  23. Paging – Some Tricks
  • Shared memory
    • Parts of the virtual memory of two or more processes map to the same frames
    • Finer-grained sharing than segments
    • Data sharing with Read/Write
    • Shared libraries with eXecute
    • Each process has its own PTEs – different privileges are possible

  24. Shared Pages (illustration)

  25. Paging Trick – Copy-on-Write (COW)
  • During fork()
    • Don’t copy the individual pages of virtual memory
    • Just copy the page tables; all pages start out shared
    • Set all PTEs to read-only in both processes
  • When either process hits a write-protection fault on a valid page
    • Make a copy of that page, then clear the write-protect bit in that process’s PTE to allow writing
    • Resume the faulting process
  • Saves a lot of unnecessary copying
    • Especially if the child process will delete and reload all of its virtual memory with a call to exec()
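The COW logic can be sketched as follows (Page objects stand in for physical frames, so a shared object means a shared frame; all names are illustrative):

```python
class Page:
    def __init__(self, data):
        self.data = data

def fork_cow(parent_pt):
    """fork(): copy only the page table; share every frame, write-protected."""
    child_pt = {vpn: page for vpn, page in parent_pt.items()}
    writable = {vpn: False for vpn in parent_pt}   # all PTEs start read-only
    return child_pt, writable

def cow_write(pt, writable, vpn, data):
    if not writable.get(vpn, True):    # write-protection fault on a shared page
        pt[vpn] = Page(pt[vpn].data)   # copy the page now...
        writable[vpn] = True           # ...and allow writes in this process
    pt[vpn].data = data

parent = {0: Page("hello")}
child, child_writable = fork_cow(parent)
cow_write(child, child_writable, 0, "bye")
print(parent[0].data, child[0].data)
```

Note the payoff: the copy happens only for pages actually written, so a fork() followed immediately by exec() copies almost nothing.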

  26. Definition — Mapping
  • Mapping (a part of) virtual memory to a file means connecting blocks of the file to the virtual memory pages so that
    • Page faults in (that part of) virtual memory resolve to blocks of the file
    • Dirty pages in (that part of) virtual memory are written back to blocks of the file

  27. Warning — Memory-mapped Files
  Hard to program; even harder to manage performance
  • Tempting idea:
    • Instead of standard I/O calls (read(), write(), etc.), map the file starting at virtual address X
    • An access to “X + n” refers to the file at offset n
    • Use the paging mechanism to read file blocks into physical memory on demand …
    • … and write them out as needed
  • Has been tried many times
    • Rarely works well in practice
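The mechanism itself is easy to demonstrate with Python's standard mmap module — loads and stores stand in for read()/write(); the slide's performance warning still applies:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello paging")

m = mmap.mmap(fd, 0)            # map the whole file, read/write
first = bytes(m[0:5])           # a load: may fault the page in from the file
m[0:5] = b"HELLO"               # a store: dirties the page; written back later
data = bytes(m[:])

m.close()
os.close(fd)
os.remove(path)
print(first, data)
```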

  28. Virtual Memory Organization (from an earlier topic)
  [Diagram: virtual address space from 0x00000000 to 0xFFFFFFFF — program code (text) with the PC, then static data, then the heap (dynamically allocated), with the stack (dynamically allocated) and SP growing down from the top]

  29. Virtual Memory Organization
  [Diagram: program code & constants mapped to a read-only executable file; static data and the heap each mapped to a writable swap file; a guard page (never mapped) below the stack; the stack (dynamically allocated) mapped to a writable swap file; the remaining region unmapped, but may be dynamically mapped]

  30. Virtual Memory Organization (continued)
  [Diagram: as before, but with per-thread stacks — thread 1, 2, and 3 stacks, each with its own SP and each mapped to a writable swap file, separated by never-mapped guard pages, above the heap, static data, and read-only program code & constants]

  31. Virtual Memory Organization (continued)
  • Each area of process VM is its own segment (or sequence of segments)
    • Executable code & constants – fixed size
    • Shared libraries
    • Static data – fixed size
    • Heap – open-ended
    • Stacks – each open-ended
    • Other segments as needed
      • E.g., symbol tables, loader information, etc.

  32. Virtual Memory – a Major OS Abstraction
  • Processes execute in virtual memory, not physical memory
  • Virtual memory may be very large
    • Much larger than physical memory
  • Virtual memory may be organized in interesting & useful ways
    • Open-ended segments
  • Virtual memory is where the action is!
    • Physical memory is just a cache of virtual memory

  33. Reading Assignment
  • Tanenbaum
    • §§ 3.1–3.2 (previous topic – Memory Management)
    • § 3.3 (this topic on Paging)

  34. Questions?
