
CS 241 Section Week #9 (04/09/09)




Presentation Transcript


  1. CS 241 Section Week #9 (04/09/09)

  2. Topics • LMP2 Overview • Memory Management • Virtual Memory • Page Tables

  3. LMP2 Overview

  4. LMP2 Overview • LMP2 attempts to encode or decode a number of files in the following way: • encode: %> ./mmap -e -b16 file1 [file2 ...] • decode: %> ./mmap -d -b8 file1 [file2 ...] • It takes the following parameters: • whether it has to encode (‘-e’) or decode (‘-d’); • the number of bytes (rw_units) for each read/write from the file; • the list of files to process

  5. LMP2 Overview • You have TWO weeks to complete and submit LMP2. We have divided LMP2 into two stages: • Stage 1: • Implement a simple virtual memory. • It is recommended that you implement the my_mmap() function this week. • You will need to complete various data structures to deal with the file mapping table, the page table, the physical memory, etc.

  6. LMP2 Overview • You have TWO weeks to complete and submit LMP2. We have divided LMP2 into two stages: • Stage 2: • Implement various functions for memory-mapped files, including: • my_mread(), my_mwrite() and my_munmap() • Handle page faults in your my_mread() and my_mwrite() functions • Implement two simple manipulations on files: • encoding • decoding

  7. Memory Management

  8. Memory • Contiguous allocation and compaction • Paging and page replacement algorithms

  9. Fragmentation • External fragmentation: free space becomes divided into many small pieces; caused over time by allocating and freeing storage of different sizes • Internal fragmentation: space reserved inside an allocated block but never used; caused by allocating storage in fixed-size units

  10. Contiguous Allocation • Memory is allocated in monolithic segments or blocks • Public enemy #1: external fragmentation • We can solve this by periodically rearranging the contents of memory

  11. Storage Placement Algorithms • Best Fit: produces the smallest leftover hole, but creates small holes that cannot be used • First Fit: creates average-size holes • Worst Fit: produces the largest leftover hole; makes it difficult to run large programs • First Fit and Best Fit are better than Worst Fit in terms of SPEED and STORAGE UTILIZATION

  15. Exercise Consider a swapping system in which memory consists of the following hole sizes in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB, and 15KB. Which hole is taken for successive segment requests of (a) 12KB, (b) 10KB, (c) 9KB? • First Fit: 20KB, 10KB and 18KB • Best Fit: 12KB, 10KB and 9KB • Worst Fit: 20KB, 18KB and 15KB

  21. malloc Revisited • Free storage is kept as a list of free blocks • Each block contains a size, a pointer to the next block, and the space itself • When a request for space is made, the free list is scanned until a big-enough block is found • Which storage placement algorithm is used? (First fit, in the classic K&R implementation) • If such a block is found, it is returned and the free list is adjusted; otherwise, another large chunk is obtained from the OS and linked into the free list

  24. malloc Revisited (continued)

  typedef long Align;          /* for alignment to long */

  union header {               /* block header */
      struct {
          union header *ptr;   /* next block if on free list */
          unsigned size;       /* size of this block */
      } s;
      Align x;                 /* force alignment of blocks */
  };

  typedef union header Header;

  25. Compaction • After numerous malloc() and free() calls, our memory will have many holes • Total free memory is much greater than that of any contiguous chunk • We can compact our allocated memory • Shift all allocations to one end of memory, and all holes to the other end • Temporarily eliminates external fragmentation

  26. Compaction (example) • Lucky that A fit in there! To be sure that there is enough space, we may want to compact at (d), (e), or (f) • Unfortunately, compaction is problematic • It is very costly. How much, exactly? • How else can we eliminate external fragmentation?

  27. Paging • Divide memory into pages of equal size • We don’t need to assign contiguous chunks • Internal fragmentation can only occur on the last page assigned to a process • External fragmentation cannot occur at all • Need to map contiguous logical memory addresses to disjoint pages

  28. Page Replacement • We may not have enough space in physical memory for all pages of every process at the same time. • But which pages shall we keep? • Use the history of page accesses to decide • Also useful to know the dirty pages

  29. Page Replacement Strategies • It takes two disk operations to replace a dirty page, so: • Keep track of dirty bits, attempt to replace clean pages first • Write dirty pages to disk during idle disk time • We try to approximate the optimal strategy but can seldom achieve it, because we don’t know what order a process will use its pages. • Best we can do is run a program multiple times, and track which pages it accesses

  30. Page Replacement Algorithms • Optimal: the page whose next use is farthest in the future is removed first • FIFO (First In, First Out): based on the time the page has spent in main memory • LRU (Least Recently Used): the locality of reference principle again • MRU (Most Recently Used): the most recently used page is removed first (when would this be useful?) • LFU (Least Frequently Used): replace the page that is used least often

  31. Example • Physical memory size: 4 pages • Pages are loaded on demand • Access history:0 1 2 3 4 0 1 2 3 4 … • Which algorithm does best here? • Access history:0 1 2 3 4 4 3 2 1 0 … • And here?

  32. Virtual Memory

  33. Why Virtual Memory? • Use main memory as a cache for the disk • Address space of a process can exceed physical memory size • Sum of address spaces of multiple processes can exceed physical memory • Simplify memory management • Multiple processes resident in main memory, each with its own address space • Only “active” code and data is actually in memory • Provide protection • One process can’t interfere with another, because they operate in different address spaces • A user process cannot access privileged information: different sections of the address space have different permissions

  36. Principle of Locality • Program and data references within a process tend to cluster • Only a few pieces of a process will be needed over a short period of time (active data or code) • Possible to make intelligent guesses about which pieces will be needed in the future • This suggests that virtual memory may work efficiently

  40. VM Address Translation • Parameters: P = 2^p = page size (bytes), N = 2^n = virtual address limit, M = 2^m = physical address limit • A virtual address splits into a virtual page number (bits n-1..p) and a page offset (bits p-1..0) • Translation maps the virtual page number to a physical page number (bits m-1..p); the page offset bits don’t change as a result of translation

  41. Page Table • Keeps track of what pages are in memory • Provides a mapping from virtual address to physical address

  43. Handling a Page Fault • On a page fault: • Look for an empty page frame in RAM • May need to write a page to disk to free a frame • Load the faulted page into that empty frame • Modify the page table

  46. Addressing • 64MB RAM (2^26 bytes) • 2^31 bytes (2GB) of total virtual memory, so virtual addresses are 31 bits • 4KB page size (2^12 bytes) • So we need 12 bits for the offset; the remaining bits select the page • Virtual Address (31 bits) = Virtual Page Number (19 bits) + Page Offset (12 bits) • With 19 bits we have 2^19 = 524,288 virtual pages
