
Introduction to Systems Programming Lecture 8


Presentation Transcript


  1. Introduction to Systems Programming Lecture 8 Paging Design

  2. Steps in Handling a Page Fault

  3. Virtual → Physical mapping • CPU accesses virtual address 100000 • MMU looks in the page table to find the physical address • Page table is in memory too • Unreasonable overhead!
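For concreteness, here is a minimal C sketch of the translation step the MMU performs conceptually on every access; the page size, table size, and all names are illustrative assumptions, not part of the lecture:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE   4096u     /* assumed 4 KB pages                        */
#define PAGE_SHIFT  12        /* log2(PAGE_SIZE)                           */
#define NUM_PAGES   1024u     /* hypothetical small virtual address space  */

typedef struct {
    bool     present;         /* is the page currently resident in memory? */
    uint32_t frame;           /* physical frame number if present          */
} pte_t;

/* The page table itself lives in memory -- that is the overhead problem. */
static pte_t page_table[NUM_PAGES];

/* Translate a virtual address; returns false on a page fault. */
bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */

    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return false;                           /* page fault          */

    *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}
```

Doing this lookup in memory on every access is exactly the overhead the TLB on the following slides is meant to avoid.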

  4. TLB: Translation Lookaside Buffer • Idea: Keep the most frequently used parts of the page table in a cache, inside the MMU chip. • TLB holds a small number of page table entries: Usually 8 – 64 • TLB hit rate very high because, e.g., instructions fetched sequentially.

  5. A TLB to speed up paging • Example: • Code loops through pages 19,20,21 • Uses data array in pages 129,130,140 • Stack variables in pages 860,861

  6. Valid TLB Entries • TLB miss: • Do regular page lookup • Evict a TLB entry and store the new TLB entry • Miniature paging system, done in hardware • When OS does context switch to a new process, all TLB entries become invalid: • Early instructions of new process will cause TLB misses.

  7. TLB placement/eviction • Done by hardware • Placement rule: • TLBIndex = VirtualAddr modulo TLBSize • TLBSize is always 2^k → TLBIndex = the k least-significant bits • Keep “tag” (rest of bits) to fully identify virtual addr • Virtual address can be in only one TLB index • No explicit “eviction”: simply overwrite what is in TLB[TLBIndex]
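A minimal C sketch of this direct-mapped scheme, indexing by the virtual page number; the TLB size and all names are hypothetical. It also includes the context-switch flush from the previous slide:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_BITS  4                         /* k: hypothetical, TLBSize = 2^k = 16 */
#define TLB_SIZE  (1u << TLB_BITS)

typedef struct {
    bool     valid;
    uint32_t tag;                           /* VPN bits above the index            */
    uint32_t frame;                         /* cached translation                  */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];

/* Direct-mapped lookup: a given VPN can live in exactly one slot. */
bool tlb_lookup(uint32_t vpn, uint32_t *frame)
{
    uint32_t index = vpn & (TLB_SIZE - 1);  /* vpn mod TLBSize (k low bits)        */
    uint32_t tag   = vpn >> TLB_BITS;       /* rest of the bits identify the vpn   */

    if (tlb[index].valid && tlb[index].tag == tag) {
        *frame = tlb[index].frame;          /* TLB hit                             */
        return true;
    }
    return false;                           /* TLB miss: fall back to page table   */
}

/* "Eviction" is just overwriting whatever occupies the slot. */
void tlb_insert(uint32_t vpn, uint32_t frame)
{
    uint32_t index = vpn & (TLB_SIZE - 1);
    tlb[index] = (tlb_entry_t){ .valid = true, .tag = vpn >> TLB_BITS, .frame = frame };
}

/* On a context switch all cached translations belong to the old process. */
void tlb_flush(void)
{
    for (uint32_t i = 0; i < TLB_SIZE; i++)
        tlb[i].valid = false;
}
```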

  8. TLB + Page table lookup • Virtual address → In TLB? • Yes → physical address • No → In page table? • Yes → update TLB, then form the physical address • No → page fault: copy the page from disk to memory, then retry
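Putting the two sketches above together, one memory access might look like this (reusing their definitions; handle_page_fault is a hypothetical helper, not something defined in the lecture):

```c
/* Hypothetical fault handler: loads the page from disk and updates the page table. */
extern void handle_page_fault(uint32_t vpn);

/* One memory access, chaining the TLB and page-table sketches above. */
uint32_t access_memory(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    uint32_t frame, paddr;

    if (tlb_lookup(vpn, &frame))                 /* in TLB? yes:                 */
        return (frame << PAGE_SHIFT) | offset;   /*   physical address           */

    if (!translate(vaddr, &paddr)) {             /* in page table? no:           */
        handle_page_fault(vpn);                  /*   copy from disk to memory   */
        (void)translate(vaddr, &paddr);          /*   retry; page is now present */
    }
    tlb_insert(vpn, paddr >> PAGE_SHIFT);        /* update TLB                   */
    return paddr;                                /* physical address             */
}
```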

  9. TLB – cont. • If address is in TLB → page is in physical memory • OS invalidates TLB entry when evicting a page • So page fault not possible if we have a TLB hit • “page fault rate” is computed only on TLB misses

  10. Example: Average memory access time • Assume: TLB lookup 4 ns, physical memory access 10 ns, disk access 10 ms, TLB miss rate 1%, page fault rate 0.1%, and the page table is in memory • TLB hit: p = 0.99, time = 4 ns + 10 ns • TLB miss, page hit: p = 0.01 × 0.999, time = 4 ns + 10 ns + 10 ns • TLB miss, page fault: p = 0.01 × 0.001, time = 4 ns + 10 ns + 10 ms + 10 ns • Average memory access time: 114.1 ns (1.141 × 10⁻⁷ s)
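Written out as a weighted average (with 10 ms = 10⁷ ns):

```latex
\begin{align*}
T_{\text{avg}} &= 0.99\,(4+10)\ \text{ns}
   + (0.01)(0.999)\,(4+10+10)\ \text{ns}
   + (0.01)(0.001)\,(4+10+10^{7}+10)\ \text{ns} \\
               &\approx 13.86 + 0.24 + 100.00\ \text{ns}
                = 114.1\ \text{ns}
\end{align*}
```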

  11. Design issues in Paging

  12. Local versus Global Allocation Policies: Physical Memory • Original configuration – ‘A’ causes a page fault • Local page replacement • Global page replacement

  13. Local or Global? • Local → number of frames per process is fixed • If working set grows → thrashing • If working set shrinks → waste • Global usually better • Some algorithms can only be local (working set, WSClock).

  14. How many frames to give a process? • Fixed number • Proportional to its size (before load) • Zero, let it issue page faults for all its pages. • This is called pure demand paging. • Monitor page-fault frequency (PFF); give the process more frames if its PFF is high.
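A sketch of the PFF idea, with hypothetical thresholds and bookkeeping: measure each process's fault count over an interval and grow or shrink its frame allotment accordingly.

```c
#define PFF_HIGH  10   /* faults per interval above which we add a frame      */
#define PFF_LOW    2   /* faults per interval below which we reclaim a frame  */

struct process {
    unsigned faults_this_interval;
    unsigned frames_allocated;
};

/* Called once per measurement interval for each process. */
void pff_adjust(struct process *p)
{
    if (p->faults_this_interval > PFF_HIGH)
        p->frames_allocated++;          /* faulting too often: give it more memory */
    else if (p->faults_this_interval < PFF_LOW && p->frames_allocated > 1)
        p->frames_allocated--;          /* plenty of headroom: take a frame back   */

    p->faults_this_interval = 0;        /* start the next measurement interval     */
}
```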

  15. Page fault rate as a function of the number of page frames assigned

  16. Load Control • Despite good designs, system may still thrash • When PFF algorithm indicates • some processes need more memory • but no processes need less • Solution: Reduce number of processes competing for memory • swap one or more to disk, divide up frames they held • reconsider degree of multiprogramming
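A sketch of such a load-control loop; the helper routines are hypothetical placeholders for OS services, not anything named in the lecture:

```c
#include <stdbool.h>
#include <stddef.h>

struct process;                              /* opaque; details not needed here */

/* Hypothetical helpers assumed to exist in the surrounding OS: */
extern bool all_processes_need_more_memory(void);
extern struct process *pick_victim(void);    /* e.g. lowest priority or largest */
extern void swap_out(struct process *p);     /* write its pages out to disk     */
extern void redistribute_frames(void);       /* divide up the frames it held    */

/* Reduce the degree of multiprogramming until the remaining processes fit. */
void load_control(void)
{
    while (all_processes_need_more_memory()) {
        struct process *victim = pick_victim();
        if (victim == NULL)
            break;                           /* nothing left to swap out        */
        swap_out(victim);
        redistribute_frames();
    }
}
```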

  17. Cleaning Policy • Need for a background process, the paging daemon • periodically inspects the state of memory • When too few frames are free • selects pages to evict using a replacement algorithm • It can use the same circular list (clock) • as the regular page-replacement algorithm, but with a different pointer
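A sketch of a paging daemon driving a clock (second-chance) sweep with its own hand; the frame count, threshold, and helper functions are hypothetical:

```c
#include <stdbool.h>

#define NUM_FRAMES   1024
#define FREE_TARGET    64          /* keep at least this many frames free        */

struct frame {
    bool in_use;
    bool referenced;               /* R bit, cleared as the hand sweeps past     */
};

static struct frame frames[NUM_FRAMES];
static int daemon_hand;            /* separate pointer from the fault-time hand  */

extern int  count_free_frames(void);
extern void write_back_and_free(int frame_index);   /* hypothetical eviction helper */

/* Invoked periodically: evict pages until enough frames are free again. */
void paging_daemon_tick(void)
{
    while (count_free_frames() < FREE_TARGET) {
        struct frame *f = &frames[daemon_hand];
        if (f->in_use && !f->referenced)
            write_back_and_free(daemon_hand);        /* victim: not recently used */
        else
            f->referenced = false;                   /* give it a second chance   */
        daemon_hand = (daemon_hand + 1) % NUM_FRAMES;
    }
}
```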

  18. Windows XP Page Replacement • Processes are assigned a working set minimum and a working set maximum • The working set minimum is the minimum number of page frames the process is guaranteed to have in memory • A process may be assigned page frames up to its working set maximum • When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory • Working set trimming removes frames from processes that have more than their working set minimum
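A sketch of the trimming loop described above, with hypothetical structures and thresholds (not the actual Windows XP internals):

```c
#include <stddef.h>

#define FREE_MEMORY_THRESHOLD  (16 * 1024)   /* hypothetical, in frames */

struct process {
    size_t ws_min;        /* guaranteed number of resident frames */
    size_t ws_max;        /* upper bound on resident frames       */
    size_t ws_current;    /* frames the process currently holds   */
};

extern size_t free_frames(void);
extern void   evict_one_frame(struct process *p);   /* hypothetical helper */

/* Trim processes that hold more than their working-set minimum until the
 * amount of free memory is back above the threshold. */
void working_set_trim(struct process *procs, size_t n)
{
    for (size_t i = 0; i < n && free_frames() < FREE_MEMORY_THRESHOLD; i++) {
        struct process *p = &procs[i];
        while (p->ws_current > p->ws_min &&
               free_frames() < FREE_MEMORY_THRESHOLD) {
            evict_one_frame(p);
            p->ws_current--;
        }
    }
}
```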
