
Virtual Memory: Part 2



  1. Virtual Memory: Part 2 Kashyap Sheth Kishore Putta Bijal Shah Kshama Desai

  2. Index • Recap • Translation lookaside buffer • Segmentation • Segmentation with paging • Working set model • References

  3. Terms & Notions • Virtual memory (VM) is • Not a physical device but an abstract concept • Composed of the virtual address spaces (of all processes) • Virtual address space (VAS) (of one process) • Set of visible virtual addresses • (Some systems may use a single VAS for all processes)

  4. Paging • Page: the virtual address space is divided into fixed-size units called pages; every page is the same size. • Page frame: the physical address space is divided into units of the same size, called page frames. • Memory Management Unit (MMU): maps virtual addresses onto physical addresses.
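The page-to-frame mapping the MMU performs can be sketched in a few lines of Python. This is a minimal illustration, not any particular hardware design; the 4 KB page size, the page table contents, and the addresses are all hypothetical:

```python
# Minimal sketch of MMU-style address translation (hypothetical 4 KB pages).
PAGE_SIZE = 4096

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(vaddr):
    """Split a virtual address into (page, offset) and map page -> frame."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 2, offset 4 = 8196.
print(translate(4100))  # 8196
```

The key point is that only the page number is translated; the offset within the page passes through unchanged.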

  5. Translation Lookaside Buffer • Each virtual memory reference can cause two physical memory accesses • one to fetch the page table • one to fetch the data • To overcome this problem a high-speed cache is set up for page table entries • called the TLB - Translation Lookaside Buffer

  6. Translation Lookaside Buffer • Contains the page table entries that have been most recently used • Functions the same way as a memory cache

  7. Translation Lookaside Buffer • Given a virtual address, the processor examines the TLB • If the page table entry is present (a hit), the frame number is retrieved and the real address is formed • If the entry is not found in the TLB (a miss), the page number is used to index the process's page table

  8. Translation Lookaside Buffer • On a miss, the processor first checks whether the page is already in main memory • If it is not in main memory, a page fault is issued • The TLB is updated to include the new page entry

  9. Operation of Paging and Translation Lookaside Buffer(Stallings Fig 8.8)

  10. Use of a Translation Lookaside Buffer(Stallings Fig 8.7)

  11. Segmentation • What is Segmentation? • Segmentation: Advantages • Segmentation: Disadvantages

  12. [Figure: a single one-dimensional virtual address space holding a compiler's tables — call stack, parse tree (with free space allocated for its growth), constant table, source text, and symbol table. The growing symbol table has bumped into the source text table, motivating segmentation.]

  13. What is Segmentation? [Figure: the virtual address is split into a segment number and an offset. The segment number indexes the segment table (located via the STBR/STLR registers); each entry holds a base, a limit, and other bits (valid, modified, protection, etc., as in paging). The MMU checks offset < limit — if not, a memory access fault is raised — and forms the physical address as base + offset. The segments (Seg 1 code, Seg 2 data, Seg 3 stack) are placed in physical memory, leaving external fragmentation between them.]
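The base-plus-limit check that the segmentation hardware performs can be sketched as follows; the segment numbers, bases, and limits are hypothetical:

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {1: (0x4000, 0x1000),   # code
                 2: (0x8000, 0x0800),   # data
                 3: (0xA000, 0x0400)}   # stack

def translate(seg, offset):
    """Check the offset against the segment limit, then add the base."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("memory access fault: offset beyond segment limit")
    return base + offset

print(hex(translate(2, 0x10)))  # 0x8010
```

Unlike paging, the check is against a per-segment limit rather than a fixed page size, which is why segments can be of unequal size.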

  14. Segmentation: Advantages • As opposed to paging: • No internal fragmentation (but: external fragmentation) • May save memory if segments are very small and should not be combined into one page (e.g. for reasons of protection) • Segment tables: only one entry per actual segment, as opposed to one per page in VM • Average segment size >> average page size → less overhead (smaller tables)

  15. Segmentation: Disadvantages • External fragmentation • Costly memory management algorithms • Segmentation: find a free memory area big enough (search!) • Paging: keep a list of free pages, any page is ok (take the first!) • Segments of unequal size are less well suited for swapping

  16. Combined Segmentationand Paging (CoSP) • What is CoSP? • CoSP: Advantages • CoSP: Disadvantages

  17. Architecture for Segmentation with Paging [Figure: the logical address is translated in two steps. The segment number s indexes the per-process segment table, whose entry holds the segment limit and the page table base; if the segment offset is not below the limit, a memory trap is raised. Otherwise the page number p indexes that segment's page table to obtain frame f, and the physical address is formed from f and the page offset po.]
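The two-step translation can be sketched by giving each segment its own small page table. As above, the page size, segment limits, and frame numbers are hypothetical:

```python
PAGE_SIZE = 4096

# Hypothetical per-process segment table: segment -> (limit, page table).
# Each segment's page table maps page number -> frame number.
segment_table = {
    0: (2 * PAGE_SIZE, {0: 9, 1: 4}),   # a 2-page segment
    1: (1 * PAGE_SIZE, {0: 6}),         # a 1-page segment
}

def translate(seg, seg_offset):
    """Segment lookup and limit check, then a page lookup within it."""
    limit, page_table = segment_table[seg]
    if seg_offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    page, page_offset = divmod(seg_offset, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + page_offset

print(translate(0, 4100))  # page 1 of segment 0 -> frame 4, offset 4
```

Because each page table covers only one segment, its size is bounded by the segment size, which is the memory saving claimed on the next slide.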

  18. CoSP: Advantages • Reduces memory usage as opposed to pure paging • Page table size limited by segment size • Segment table has only one entry per actual segment • Simplifies handling protection and sharing of larger modules (define them as segments) • Most advantages of paging still hold • Simplifies memory allocation • Eliminates external fragmentation • Supports swapping, demand paging, prepaging etc.

  19. CoSP: Disadvantages • Internal fragmentation • Yet only an average of about ½ page per contiguous address range [Figure: a process requests a 6 KB address range with 4 KB pages; two pages are allocated, and the unused part of page 2 is internal fragmentation.]
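The arithmetic behind the figure is simply rounding the request up to whole pages; the 6 KB request and 4 KB page size are the slide's own example:

```python
import math

PAGE = 4096
request = 6 * 1024                   # 6 KB contiguous address range
pages = math.ceil(request / PAGE)    # 2 pages must be allocated
wasted = pages * PAGE - request      # internal fragmentation in page 2
print(pages, wasted)                 # 2 pages, 2048 bytes wasted
```

Averaged over many requests of arbitrary length, the wasted tail is about half a page per contiguous range, as the slide states.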

  20. Working Sets • Working set of pages: the minimum collection of pages that must be loaded in main memory for a process to operate efficiently without unnecessary page faults. • “Smallest collection of information that must be present in main memory to assure efficient execution of the program.” • Process/working set: two manifestations of the same ongoing computational activity.

  21. Working Set Strategy • W(t,D) = the set of pages of a process that have been referenced in the last D virtual time units at time t. • Virtual time = time that elapses while the process is in execution, measured in instruction steps. • Working set size: the number of pages in W(t,D).
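The definition of W(t,D) can be made concrete by computing it over a page reference string, using the reference index as virtual time. The reference string below is hypothetical:

```python
def working_set(refs, t, D):
    """W(t, D): distinct pages referenced in the last D references up to time t."""
    window = refs[max(0, t - D + 1): t + 1]
    return set(window)

refs = [2, 1, 5, 7, 7, 7, 5, 1]    # hypothetical page reference string
print(working_set(refs, 7, 4))      # {1, 5, 7}
```

Note how repeated references to page 7 keep the working set small even though eight references were made, which is exactly the locality the model exploits.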

  22. Characteristics of Working Sets • Size: the working set is a non-decreasing function of the window size D. Specifically, W(t, D+1) contains W(t,D). • Prediction: intuitively, we expect immediate past page reference behavior to be a good predictor of immediate future behavior.

  23. Detecting/Measuring W(t,D) • Hardware mechanism to record whether a page was referenced in the last D seconds. • Software: • Sample the page table entries at intervals of D/K. • Any page referenced in these intervals is in the working set.

  24. Memory Allocation • A program will not be run unless there is space in memory for its working set.

  25. Using the Working Set Concept • A strategy for resident set size: • Monitor the working set of each process • Periodically remove from the resident set of a process those pages that are not in its working set. • A process may execute only if its working set is in main memory (i.e. if its resident set includes its working set).

  26. Issues With this Strategy • Past does not necessarily predict future. Size and membership of working set change over time. • A true measurement of WS for each process is impractical. Need to time stamp every page reference and keep a time-ordered queue. • Optimal value of D is unknown and would vary.

  27. Alternatively • Look at the page fault rate, not exact page references. • The page fault rate falls as the resident set size increases. • If the page fault rate is below some threshold, decrease the resident set size • If it is above some threshold, increase the resident set size.
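This feedback rule (often called the page fault frequency approach) can be sketched as a simple controller. The thresholds and the fault-rate unit are hypothetical:

```python
# Hypothetical thresholds, with fault rate as faults per 1000 references.
LOW, HIGH = 5, 20

def adjust(resident_set_size, fault_rate):
    """Grow the resident set on a high fault rate, shrink it on a low one."""
    if fault_rate > HIGH:
        return resident_set_size + 1
    if fault_rate < LOW:
        return max(1, resident_set_size - 1)
    return resident_set_size          # within bounds: leave it alone

print(adjust(10, 25))  # 11
print(adjust(10, 2))   # 9
```

The attraction over true working-set measurement is that the fault rate is cheap to observe, while time-stamping every page reference is not.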

  28. References • Gary Nutt, Operating Systems, 3rd edition. • Andrew S. Tanenbaum, Modern Operating Systems, 2nd edition. • William Stallings, Operating Systems. • World Wide Web.

  29. The End
