
“Memory Management Function”


Presentation Transcript


  1. “Memory Management Function” Presented by Lect. Rimple Bala, GPC, Amritsar

  2. Contents • Basic memory management • Swapping • Virtual memory • Paging • Page replacement algorithms • Modeling page replacement algorithms • Design issues for paging systems • Implementation issues • Segmentation

  3. Memory Management • Memory – a linear array of bytes • Holds O.S. and programs (processes) • Each cell (byte) is named by a unique memory address • Recall, processes are defined by an address space, consisting of text, data, and stack regions • Process execution • CPU fetches instructions from the text region according to the value of the program counter (PC) • Each instruction may request additional operands from the data or stack region

  4. Memory Management • Ideally programmers want memory that is • large • fast • non-volatile • Memory hierarchy • small amount of fast, expensive memory – cache • some medium-speed, medium-priced main memory • gigabytes of slow, cheap disk storage • The memory manager handles the memory hierarchy

  5. Multiprogramming with Fixed Partitions • Fixed memory partitions • separate input queues for each partition • single input queue

  6. Modeling Multiprogramming • CPU utilization as a function of the number of processes in memory (the degree of multiprogramming)
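A note on the model behind this curve (the formula is not spelled out on the slide, so treat this as the standard probabilistic sketch): if each process waits for I/O a fraction p of its time and the n processes wait independently, the probability that all n are waiting at once is p^n, so CPU utilization ≈ 1 − p^n. For example, with p = 0.8 and n = 4 processes, utilization ≈ 1 − 0.41 ≈ 59%.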

  7. Names and Binding • Symbolic names → Logical names → Physical names • Symbolic Names: known in a context or path • file names, program names, printer/device names, user names • Logical Names: used to label a specific entity • job number, major/minor device numbers, process id (pid), uid • Physical Names: address of an entity • inode address on disk or in memory • entry point or variable address • PCB address

  8. Address Binding • Address binding • binding physical addresses to the logical addresses of a process's address space • Compile time binding • if the program's location is fixed and known ahead of time • Load time binding • if the program's location in memory is unknown until load time AND the location is then fixed • Execution time binding • if processes can be moved in memory during execution • requires hardware support

  9. Compile Time, Load Time, and Execution Time Address Binding (figure: the same program P, calling library routine foo, shown placed at addresses 0-175 and relocated to 1000-1175; with compile-time or load-time binding the jump target is rewritten to 1175, while with execution-time binding the code keeps jmp 175 and a base register holding 1000 is added on every reference)

  10. Dynamic Relocation with a Base Register • The Memory Management Unit (MMU) dynamically converts logical addresses into physical addresses • The MMU contains a base (relocation) register for the running process (figure: the program-generated address of process i is added to its relocation register by the MMU to form the physical memory address)

  11. Protection Using Base & Limit Registers • Memory protection • The base register gives the starting physical address for the process • The limit register bounds the offset that may be added to the base register (figure: each logical address is compared against the limit register; if it is smaller it is added to the base register to form the physical address, otherwise an addressing-error trap is raised)
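To make the check above concrete, here is a minimal sketch in C of what the MMU does on every reference, assuming a single base register and a single limit register for the running process (the register values and the trap handling are illustrative assumptions, not tied to any particular hardware):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative MMU state for the running process (assumed values). */
static uint32_t base_reg  = 1000;   /* start of the process in physical memory */
static uint32_t limit_reg = 4096;   /* size of the process's address space     */

/* Translate a logical address, trapping on an out-of-range reference. */
uint32_t translate(uint32_t logical)
{
    if (logical >= limit_reg) {             /* compare against the limit first */
        fprintf(stderr, "addressing error: %u\n", (unsigned)logical);
        exit(EXIT_FAILURE);                 /* a real OS would raise a trap    */
    }
    return base_reg + logical;              /* then relocate with the base     */
}
```

With execution-time binding, the OS only has to reload these two registers on a context switch to relocate and protect the process.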

  12. Logical vs. Physical Address Space • The concept of a logical address space that is bound to a separate physical address space is central to proper memory management • Logical address (or virtual address): generated by the CPU • Physical address: the address seen by the memory unit • Logical and physical addresses are the same in compile-time and load-time binding schemes • Logical and physical addresses differ in the execution-time address-binding scheme

  13. Features of Memory Management • Relocation (static and dynamic) • Protection • Sharing • Logical organization • Physical organization • Memory compaction

  14. Various Methods used by OS • Swapping • Virtual Memory • Paging • Segmentation

  15. Swapping • Memory allocation changes as processes come into memory and leave memory (figure: shaded regions are unused memory)

  16. Swapping • Allocating space for a growing data segment • Allocating space for a growing stack and data segment

  17. Memory Management with Bit Maps • Part of memory with 5 processes, 3 holes • tick marks show allocation units • shaded regions are free • Corresponding bit map • Same information as a list
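A minimal sketch, assuming one allocation unit per bitmap cell (0 = free, 1 = in use), of the search a bit-map allocator must perform for a run of k free units; the names and the one-byte-per-bit simplification are illustrative assumptions:

```c
#include <stdint.h>

#define UNITS 64
static uint8_t bitmap[UNITS];   /* one byte per unit for clarity; real maps pack bits */

/* Return the index of the first run of k free units, or -1 if none exists. */
int find_free_run(int k)
{
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = bitmap[i] ? 0 : run + 1;   /* count consecutive free units */
        if (run == k)
            return i - k + 1;
    }
    return -1;
}

/* Mark a run of units as allocated. */
void mark_allocated(int start, int k)
{
    for (int i = start; i < start + k; i++)
        bitmap[i] = 1;
}
```

Searching the map for a long enough run is the slow part of this scheme, which is the main argument for the linked-list approach on the next slide.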

  18. Memory Management with Linked Lists Four neighbor combinations for the terminating process X

  19. Managing Memory with Linked Lists • Searching the list for space for a new process • First Fit • Take the first hole that is big enough (a sketch follows below) • Next Fit • Start from the current location in the list • Best Fit • Find the smallest hole that will work • Tends to create lots of really small holes • Worst Fit • Find the largest hole • The remainder will be big • Quick Fit • Keep separate lists for common sizes
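The first-fit sketch referenced above, assuming the free holes are kept on a singly linked list sorted by address (the node layout and return convention are assumptions for illustration):

```c
#include <stddef.h>

struct hole {
    size_t start;        /* first allocation unit of the hole */
    size_t length;       /* length in allocation units        */
    struct hole *next;
};

/* First fit: take the first hole large enough, carving the request off its
 * front and shrinking the hole in place.  Returns the start of the allocated
 * region, or (size_t)-1 if nothing fits.  A hole shrunk to length 0 would
 * normally be unlinked from the list; that bookkeeping is omitted here. */
size_t first_fit(struct hole *free_list, size_t request)
{
    for (struct hole *h = free_list; h != NULL; h = h->next) {
        if (h->length >= request) {
            size_t start = h->start;
            h->start  += request;
            h->length -= request;
            return start;
        }
    }
    return (size_t)-1;
}
```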

  20. Virtual Memory • Virtual Memory • Separation of user logical memory from physical memory. • Only PART of the program needs to be in memory for execution. • Logical address space can be much larger than physical address space. • Need to allow pages to be swapped in and out. • Virtual Memory can be implemented via • Paging • Segmentation

  21. Virtual Memory The position and function of the MMU

  22. Paging • The relation between virtual addresses and physical memory addresses given by the page table

  23. Page Tables • Internal operation of the MMU with 16 4-KB pages
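A minimal sketch of the translation step for the configuration in the figure (16 virtual pages of 4 KB each); the page-table entry layout and the fault handling are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u            /* 4 KB pages -> 12-bit offset       */
#define NUM_PAGES 16u              /* 16 virtual pages -> 4-bit number  */

struct pte {
    unsigned present : 1;          /* page is currently in memory       */
    unsigned frame   : 4;          /* physical frame number             */
};

static struct pte page_table[NUM_PAGES];

uint32_t mmu_translate(uint32_t vaddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;    /* upper bits: page number   */
    uint32_t offset = vaddr % PAGE_SIZE;    /* lower bits: offset        */

    if (page >= NUM_PAGES || !page_table[page].present) {
        fprintf(stderr, "page fault at %u\n", (unsigned)vaddr);  /* trap to the OS */
        exit(EXIT_FAILURE);
    }
    return page_table[page].frame * PAGE_SIZE + offset;
}
```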

  24. Page Tables • 32-bit address with two page table fields • Two-level page tables (figure: a top-level page table whose entries point to second-level page tables)
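Assuming the usual split for this figure of 10 bits for the top-level index (PT1), 10 bits for the second-level index (PT2), and 12 bits of offset, the field extraction looks like this (a sketch of the address split, not a full two-level walk):

```c
#include <stdint.h>

/* Split a 32-bit virtual address as PT1 (10 bits) | PT2 (10 bits) | offset (12 bits). */
void split_vaddr(uint32_t vaddr, uint32_t *pt1, uint32_t *pt2, uint32_t *offset)
{
    *pt1    = (vaddr >> 22) & 0x3FF;   /* index into the top-level page table        */
    *pt2    = (vaddr >> 12) & 0x3FF;   /* index into the chosen second-level table   */
    *offset =  vaddr        & 0xFFF;   /* byte within the 4 KB page                  */
}
```

The point of the two-level scheme is that second-level tables for unused regions of the address space never have to exist in memory.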

  25. Page Tables Typical page table entry

  26. TLBs – Translation Lookaside Buffers • A TLB to speed up paging
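A minimal sketch of how the TLB short-circuits the page-table walk: the MMU consults the small associative table first and only falls back to the full page table on a miss (the entry layout and size here are assumptions):

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 8

struct tlb_entry {
    bool     valid;
    uint32_t page;     /* virtual page number   */
    uint32_t frame;    /* physical frame number */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns true and fills *frame on a TLB hit; on a miss the caller falls
 * back to the page-table walk and then refills one TLB entry. */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;
        }
    }
    return false;
}
```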

  27. Inverted Page Tables Comparison of a traditional page table with an inverted page table

  28. Page Replacement Algorithms • Page fault forces choice • which page must be removed • make room for incoming page • Modified page must first be saved • unmodified just overwritten • Better not to choose an often used page • will probably need to be brought back in soon

  29. Page Replacement Algorithms • Optimal Page Replacement • Not Recently Used (NRU) • FIFO • Second Chance • Clock • Least Recently Used (LRU)

  30. Optimal Page Replacement Algorithm • Replace the page whose next use lies farthest in the future • Optimal but unrealizable • Estimate by … • logging page use on previous runs of the process • although this is impractical

  31. Not Recently Used Page Replacement Algorithm • Each page has a Reference bit and a Modified bit • bits are set when the page is referenced or modified • Pages are classified • not referenced, not modified • not referenced, modified • referenced, not modified • referenced, modified • NRU removes a page at random • from the lowest-numbered non-empty class
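A minimal sketch of the NRU victim selection described above, assuming the R and M bits are mirrored in a software page array; class = 2R + M, so class 0 is "not referenced, not modified" and class 3 is "referenced, modified":

```c
#include <stdbool.h>
#include <stdlib.h>

#define NUM_PAGES 64

struct page {
    bool present;
    bool referenced;   /* R bit, cleared periodically by the OS */
    bool modified;     /* M bit */
};

static struct page pages[NUM_PAGES];

/* Pick a victim: a random page from the lowest-numbered non-empty class. */
int nru_victim(void)
{
    for (int cls = 0; cls < 4; cls++) {
        int candidates[NUM_PAGES], n = 0;
        for (int i = 0; i < NUM_PAGES; i++) {
            if (pages[i].present &&
                2 * pages[i].referenced + pages[i].modified == cls)
                candidates[n++] = i;
        }
        if (n > 0)
            return candidates[rand() % n];
    }
    return -1;   /* no resident pages */
}
```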

  32. FIFO Page Replacement Algorithm • Maintain a linked list of all pages • in order they came into memory • Page at beginning of list replaced • Disadvantage • page in memory the longest may be often used
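A minimal sketch of FIFO replacement using a fixed array of frames treated as a circular queue (the frame count and the -1 "empty" convention are illustrative assumptions):

```c
#define NUM_FRAMES 4

static int frames[NUM_FRAMES] = { -1, -1, -1, -1 };  /* page held in each frame */
static int oldest = 0;                                /* frame loaded longest ago */

/* On a page fault, evict the page that has been resident the longest
 * and load the new page in its place.  Returns the evicted page (or -1). */
int fifo_replace(int new_page)
{
    int victim = frames[oldest];
    frames[oldest] = new_page;
    oldest = (oldest + 1) % NUM_FRAMES;   /* next-oldest frame becomes the head */
    return victim;
}
```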

  33. Second Chance Page Replacement Algorithm • Operation of second chance: pages are kept in FIFO order • Figure: the page list when a fault occurs at time 20 and page A has its R bit set; instead of being evicted, A has its R bit cleared and is moved to the end of the list as if it had just been loaded

  34. The Clock Page Replacement Algorithm
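The clock algorithm keeps the resident pages on a circular list and sweeps a hand over them: a page with R = 1 has its bit cleared and is skipped, and the first page found with R = 0 is evicted. A minimal sketch, with an assumed frame layout:

```c
#include <stdbool.h>

#define NUM_FRAMES 4

struct frame {
    int  page;         /* resident page number             */
    bool referenced;   /* R bit, set by hardware on access */
};

static struct frame frames[NUM_FRAMES];
static int hand = 0;   /* the clock hand */

/* Returns the frame index chosen for replacement. */
int clock_replace(void)
{
    for (;;) {
        if (!frames[hand].referenced) {       /* R == 0: evict this page       */
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            return victim;
        }
        frames[hand].referenced = false;      /* R == 1: give a second chance  */
        hand = (hand + 1) % NUM_FRAMES;
    }
}
```

This behaves like second chance but advances a hand instead of moving pages around a list, which is why it is usually preferred in practice.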

  35. Least Recently Used (LRU) • Assume pages used recently will be used again soon • throw out the page that has been unused for the longest time • Must keep a linked list of pages • most recently used at the front, least recently used at the rear • update this list on every memory reference! • Alternatively, keep a counter in each page table entry • choose the page with the lowest counter value • periodically zero the counter
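A minimal sketch of the counter variant mentioned above: every reference stamps the page with a global counter and the victim is the resident page with the smallest stamp (the data layout is an assumption):

```c
#include <stdint.h>

#define NUM_PAGES 64

static uint64_t clock_counter;           /* incremented on every memory reference */
static uint64_t last_use[NUM_PAGES];     /* time stamp per page                   */
static int      resident[NUM_PAGES];     /* 1 if the page is in memory            */

void on_reference(int page)              /* conceptually run on each access       */
{
    last_use[page] = ++clock_counter;
}

int lru_victim(void)                     /* page with the oldest time stamp       */
{
    int victim = -1;
    uint64_t oldest = UINT64_MAX;
    for (int i = 0; i < NUM_PAGES; i++) {
        if (resident[i] && last_use[i] < oldest) {
            oldest = last_use[i];
            victim = i;
        }
    }
    return victim;
}
```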

  36. Review of Page Replacement Algorithms

  37. Modeling Page Replacement Algorithms • Belady's Anomaly • FIFO with 3 page frames • FIFO with 4 page frames • The P's mark the page references that cause page faults
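To make the anomaly concrete (the reference string here is the standard textbook illustration, not taken from the slides): running FIFO on the reference string 0 1 2 3 0 1 4 0 1 2 3 4 causes 9 page faults with 3 page frames but 10 page faults with 4 page frames, so giving FIFO more memory actually makes it perform worse.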

  38. Stack Algorithms • State of the memory array M after each item in the reference string is processed (figure)

  39. Page Size Small page size • Advantages • less internal fragmentation • better fit for various data structures, code sections • less unused program in memory • Disadvantages • programs need many pages, larger page tables

  40. Page Size • Overhead due to the page table and internal fragmentation: overhead = se/p + p/2 • The first term, se/p, is the space taken by the page table; the second term, p/2, is the average internal fragmentation in the last page • The overhead is minimized when p = sqrt(2se) • Where • s = average process size in bytes • p = page size in bytes • e = size of a page table entry in bytes
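As a worked example under assumed numbers (they are not on the slide): with s = 1 MB and e = 8 bytes per entry, the optimum page size is p = sqrt(2 · 2^20 · 8) = sqrt(2^24) = 4096 bytes, i.e. 4 KB.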

  41. Shared Pages • Two processes sharing the same program, and therefore sharing its page table

  42. Implementation Issues • Four times when the OS is involved with paging • Process creation • determine the program size • create the page table • Process execution • MMU reset for the new process • TLB flushed • Page fault time • determine the virtual address causing the fault • swap the target page out and the needed page in • Process termination time • release the page table and pages

  43. Backing Store (a) Paging to static swap area (b) Backing up pages dynamically

  44. Separation of Policy and Mechanism Page fault handling with an external pager

  45. Segmentation • One-dimensional address space with growing tables • One table may bump into another

  46. Segmentation • Allows each table to grow or shrink independently

  47. Implementation of Pure Segmentation

  48. Segmentation with Paging: MULTICS A 34-bit MULTICS virtual address

  49. Segmentation with Paging: MULTICS Conversion of a 2-part MULTICS address into a main memory address
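A minimal sketch of the first step of that conversion, assuming the commonly described MULTICS layout of an 18-bit segment number, a 6-bit page number, and a 10-bit offset within a 1024-word page (treat the exact field widths as an assumption):

```c
#include <stdint.h>

/* Split a 34-bit MULTICS-style virtual address (assumed layout:
 * 18-bit segment | 6-bit page | 10-bit offset within the page). */
void split_multics(uint64_t vaddr, uint32_t *seg, uint32_t *page, uint32_t *offset)
{
    *seg    = (uint32_t)((vaddr >> 16) & 0x3FFFF);  /* which segment             */
    *page   = (uint32_t)((vaddr >> 10) & 0x3F);     /* which page in the segment */
    *offset = (uint32_t)( vaddr        & 0x3FF);    /* word within the page      */
}
```

The segment number selects a descriptor, the descriptor locates that segment's page table, and the page and offset fields then complete an ordinary paged translation.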

  50. Segmentation with Paging: MULTICS
