
The Operating System Perspective





Presentation Transcript


  1. The Operating System Perspective

  2. RTE's memory management services • the memory management mechanism, as viewed from the OS perspective • as opposed to the programming language perspective

  3. Virtual Memory (VM) • a scheme that enables each process to have its own independent view of memory • the OS (with help from the hardware) maps process memory into real memory • creates the illusion that more virtual memory exists than physical memory

  4. Virtual Memory (VM) • each process is presented with a virtual address space, 0 to 2^32 • the process thinks it is running alone • the CPU has no notion of virtual memory • the MMU (memory management unit) translates each virtual address issued by the CPU into a physical address and sends it to memory

  5. Virtual Memory (VM) • there is less RAM installed than 2^32 bytes • we assume most processes will actually use less RAM than the full virtual space • when there is not enough room in RAM, use the disk and swap pages between RAM and disk • there is a heavy penalty for going to disk to fetch data

  6. Virtual Memory (VM) • locality of reference: processes cluster their memory references into small subsets of their VM • this is why virtual memory (a form of caching) is efficient

  7. Paging - VM implementation • physical memory is broken down into pieces of a small fixed size, 4K • physical memory can now be seen as an array of physical pages • the virtual address space is broken into pages of the same size, 4K • each virtual page is mapped to a physical page

  8. Finding the Page of an Address • an address addr resides in: • page number = addr >> log2(page size) = addr / page size • offset = addr % page size

  9. how to break down a virtual address into a virtual page number and an offset when using 4K pages on a 32-bit CPU:

  10. Page Tables • a translation table for each process • maps the virtual address space into physical memory • the OS keeps a translation table for each process: the Page Table

  11. Page Tables • each row represents a virtual page • the column holds the physical page address in which the virtual page resides • a register stores a pointer to the table • on a context switch, the OS updates the register with the current process' page table address

  12. Address lifecycle • the process executes code involving a memory address, addr • the MMU finds the virtual page number, vpn, in which addr resides • the MMU locates the vpn entry in the page table and retrieves the physical page number, ppn • the MMU replaces addr by paddr = (ppn << log2(page_size)) + (addr % page_size)

  13. example • assume page size = 4096 bytes (2^12, i.e. 1 << 12) • virtual address addr = 0x80403040 • vpn = addr >> 12 = 0x80403 • the page table maps vpn -> ppn = 0x10203 • paddr = (ppn << 12) + (addr % 4096) = 0x10203040

  14. TLB and Context Switches • TLB – translation lookaside buffer • caches recent translations made by the MMU • when the MMU needs to locate a physical page for a virtual one, it first checks the TLB • on a context switch the cached translations belong to the previous process, so the TLB contents become invalid and must be flushed

  15. Demand Paging and Swap • each process has a memory size of 2^32 • what if every process accessed that much memory? • a process does not need all of its memory all of the time • swap unneeded data out of memory, and restore it when needed

  16. Implementation • metadata attributes for each page: • Access rights: read, write, execute, or a combination thereof • Present bit: is the page mapped to a physical page at all? • Dirty bit: has the page been written to since it was loaded into physical memory?

  17. Present bit • when the OS evicts a page P from main memory, it writes P to disk • the OS sets the present bit of P to 0 • if, while translating, the MMU finds that the page is not present, it throws a page fault interrupt, which calls the OS page fault handler • the OS reads the saved page from disk into a physical page and updates the page table

  18. Present bit • lazy page allocation: • pretend to allocate the memory: the OS updates the page tables of the process • the present bit is cleared in each of these pages, so physical memory is allocated only when a page is first accessed

  19. Dirty bit • suppose a page needs to be evicted, but the page is not dirty • if the page was already written to disk earlier and has not changed since, there is no need to copy it to disk again

  20. Page Replacement Policy • the operating system must decide which page to swap out • page replacement policy: usually relies on some variant of LRU (least recently used)
