Memory Management

4.1 Basic memory management

4.2 Swapping

4.3 Virtual memory

4.4 Page replacement algorithms

4.5 Modeling page replacement algorithms

4.6 Design issues for paging systems

4.7 Implementation issues

4.8 Segmentation

Chapter 4

Page Replacement Algorithms
  • A page fault forces a choice
    • which page must be removed
    • to make room for the incoming page
  • A modified page must first be saved to disk
    • an unmodified page can simply be overwritten
  • Better not to choose an often-used page
    • it will probably need to be brought back in soon
  • The "page replacement" problem also occurs in:
    • memory caches
    • caches of web pages
Optimal Page Replacement Algorithm
  • When a page fault occurs, some set of pages is in memory
    • Replace the page that will be needed at the farthest point in the future
  • Optimal but unrealizable (why? the OS cannot know future references)
  • It can be estimated by:
    • logging page use on previous runs of a process and using the results on subsequent runs
    • although this is impractical: the log applies to one program with one specific set of inputs
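Although unrealizable online, the optimal policy can be run offline on a recorded reference string. A minimal Python sketch (the function name `opt_faults` and the list-based lookahead are illustrative, not from the slides):

```python
def opt_faults(refs, frames):
    """Count page faults under the optimal policy: on a fault with
    memory full, evict the resident page whose next use lies farthest
    in the future (pages never used again are evicted first)."""
    memory = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float('inf')  # never used again
            memory.remove(max(memory, key=next_use))
        memory.add(page)
    return faults
```

Running it on a reference string gives the lower bound that practical algorithms are measured against.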
Not Recently Used Page Replacement Algorithm
  • Each page has a Referenced (R) bit and a Modified (M) bit
    • the bits are set by hardware when the page is referenced or modified
    • if the hardware lacks them, the OS can simulate them in software; R bits are cleared periodically at clock interrupts
  • Pages are classified by (R, M):
    • Class 0: not referenced, not modified
    • Class 1: not referenced, modified (possible when a clock interrupt has cleared the R bit of a modified page)
    • Class 2: referenced, not modified
    • Class 3: referenced, modified
  • NRU removes a page at random
    • from the lowest-numbered non-empty class
    • easy to understand and implement, and reasonably efficient
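The classification and eviction step can be sketched in Python (the helper name `nru_victim` and its dict interface are illustrative assumptions):

```python
import random

def nru_victim(pages):
    """pages maps page number -> (R, M) bits. Classify each page as
    class 2*R + M and evict a random page from the lowest-numbered
    non-empty class, as NRU prescribes."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for page, (r, m) in pages.items():
        classes[2 * r + m].append(page)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])
    raise ValueError("no pages to evict")
```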
FIFO Page Replacement Algorithm
  • Maintain a linked list of all pages
    • in the order they came into memory: the oldest page is at the head of the list, the most recent at the tail
  • The page at the head of the list (the oldest) is replaced
  • Disadvantage
    • the page that has been in memory the longest may be in heavy use
  • Modification?
    • inspect the R bit of the oldest page
      • If R = 0, the page is old and unused => replace it
      • If R = 1, the page is old but in use => clear R and move the page to the end of the list
Second Chance Page Replacement Algorithm
  • Operation of second chance
    • pages are kept sorted in FIFO order
    • [Figure: the page list when a fault occurs at time 20 and page A has its R bit set; the numbers above the pages are loading times]
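The list manipulation can be sketched as follows (a Python illustration; the `deque`-based interface is an assumption). Note that if every page has R set, one full sweep clears all the bits and the algorithm degenerates to pure FIFO:

```python
from collections import deque

def second_chance(queue, r_bit):
    """queue: pages in FIFO order, oldest at the left.
    r_bit: page -> R bit. Pop the oldest page; if its R bit is set,
    clear the bit and move the page to the tail as if newly loaded;
    otherwise evict it."""
    while True:
        page = queue.popleft()
        if r_bit[page]:
            r_bit[page] = 0     # second chance: treat as newly loaded
            queue.append(page)
        else:
            return page         # old and unreferenced: evict
```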
The Clock Page Replacement Algorithm
  • Second chance is unnecessarily inefficient: it is constantly moving pages around on its list
  • The clock algorithm is the same algorithm with a different implementation: keep the page frames on a circular list and advance a "hand" instead of moving pages
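A sketch of the clock version, keeping the frames in place and moving only a hand index (the function name and interface are illustrative):

```python
def clock_evict(frames, r_bit, hand):
    """frames: list of pages treated as a circular buffer.
    hand: index of the frame the clock hand points at.
    Sweep forward, clearing R bits, until a page with R = 0 is found;
    return the victim frame's index and the new hand position."""
    while r_bit[frames[hand]]:
        r_bit[frames[hand]] = 0           # give this page a second chance
        hand = (hand + 1) % len(frames)   # advance the hand
    victim = hand
    return victim, (hand + 1) % len(frames)
```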
Least Recently Used (LRU)
  • Assumption: pages used recently will be used again soon
    • throw out the page that has been unused for the longest time
    • realizable, but not cheap
  • Must keep a linked list of pages, sorted by usage
    • most recently used at the front, least recently used at the rear
    • the list must be updated on every memory reference: find the page in the list, delete it, and move it to the front
  • Hardware approach 1: keep a counter in each page table entry
    • equip the hardware with a 64-bit counter, incremented after each instruction
    • after each memory reference, the counter value is copied to the referenced page's entry
    • at a page fault, choose the page with the lowest counter value
    • periodically zero the counter
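Hardware approach 1 is easy to mimic in software; the sketch below (an illustrative simulation, not the hardware scheme itself) uses the reference index as the global counter:

```python
def lru_counter_faults(refs, frames):
    """Simulation of counter-based LRU: a global counter ticks on
    every reference, each resident page's entry stores the counter
    value at its last use, and on a fault the page with the lowest
    stored value (least recently used) is evicted."""
    last_use = {}   # resident page -> counter value at last reference
    faults = 0
    for tick, page in enumerate(refs):
        if page not in last_use:
            faults += 1
            if len(last_use) == frames:
                del last_use[min(last_use, key=last_use.get)]
        last_use[page] = tick
    return faults
```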
Least Recently Used (LRU)

Hardware approach #2: with n page frames, maintain an n x n matrix of bits, initially all 0s. When page k is referenced, first set every bit of row k to 1, then set every bit of column k to 0. At any instant, the row with the lowest binary value belongs to the least recently used page.

LRU using the matrix: pages referenced in the order 0, 1, 2, 3, 2, 1, 0, 3, 2, 3
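Hardware approach #2 can likewise be simulated to check the claim that the smallest row value always marks the LRU page (an illustrative Python sketch):

```python
def lru_matrix_trace(refs, n):
    """n x n bit-matrix LRU: on a reference to page k, set row k to
    all 1s, then clear column k (including bit k of row k itself)."""
    m = [[0] * n for _ in range(n)]
    for k in refs:
        m[k] = [1] * n
        for i in range(n):
            m[i][k] = 0
    return m

def lru_page(m):
    """Index of the row with the smallest binary value = LRU page."""
    values = [int("".join(map(str, row)), 2) for row in m]
    return values.index(min(values))
```

For the reference string 0, 1, 2, 3, 2, 1, 0, 3, 2, 3 above, the LRU page comes out as page 1, matching a by-hand reading of the final matrix.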

Simulating LRU in Software
  • Few machines have the required hardware, so LRU must be simulated in software
  • NFU (Not Frequently Used):
    • a software counter is associated with each page, initially 0
    • at each clock interrupt, the R bit is added to the counter
    • at a page fault, the page with the lowest counter is chosen
    • problem: NFU never forgets anything (e.g. pages heavily used in an early phase, such as the first pass of a compiler, keep high counts long afterwards)
  • Aging: a modification of NFU
    • before adding the R bit, shift each counter right 1 bit (divide by 2)
    • the R bit is added as the leftmost bit, not the rightmost
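One clock tick of aging can be sketched as follows (illustrative; 8-bit counters are assumed, matching the example discussed on the next slide):

```python
def aging_tick(counters, r_bit, bits=8):
    """One clock tick of the aging algorithm: shift each page's
    counter right one bit, add the R bit as the new leftmost bit,
    then clear the R bits. On a fault, the page with the lowest
    counter is the eviction candidate."""
    for page in counters:
        counters[page] = (counters[page] >> 1) | (r_bit[page] << (bits - 1))
        r_bit[page] = 0
```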
Simulating LRU in Software
  • The aging algorithm simulates LRU in software
  • The figure shows 6 pages over 5 clock ticks, (a)-(e)
Simulating LRU in Software
  • Aging differs from LRU in two ways:
    • Counters are incremented at clock ticks, not at memory references: we lose the ability to distinguish references early in a clock interval from those occurring later (e.g. pages 3 and 5 at (e)).
    • Counters have a finite number of bits (8 in this example). If two counters are 0 we pick one at random, yet one page may have been referenced 9 ticks ago and the other 1000 ticks ago.
The Working Set Page Replacement Algorithm
  • Demand paging: pages are loaded only as needed.
  • Locality of reference: during any phase of execution, a process references only a small fraction of its pages.
  • Working set: the set of pages a process is currently using.
  • If the whole working set is not in memory, the process faults constantly => thrashing.
  • Keeping track of each process's working set and loading it before letting the process run => prepaging.
  • w(k, t): the set of pages used by the k most recent page references at instant t. w(k, t) is a monotonically nondecreasing function of k.
The Working Set Page Replacement Algorithm
  • The working set is the set of pages used by the k most recent memory references
  • w(k, t) is the size of the working set at time t
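A direct sketch of w(k, t) over a recorded reference string (the helper name and 0-indexed time convention are illustrative assumptions):

```python
def working_set(refs, t, k):
    """w(k, t): the set of distinct pages among the k most recent
    references up to and including instant t (0-indexed)."""
    return set(refs[max(0, t - k + 1):t + 1])
```

Because widening the window can only add pages, |w(k, t)| is monotonically nondecreasing in k, as the previous slide states.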
Belady's Anomaly
  • FIFO with 3 page frames
  • FIFO with 4 page frames
  • The P's mark the page references that cause page faults
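The anomaly is easy to reproduce with a small FIFO simulator (an illustrative sketch) and Belady's reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    order = deque()      # pages in arrival order, oldest first
    resident = set()
    faults = 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            resident.remove(order.popleft())   # evict the oldest page
        order.append(page)
        resident.add(page)
    return faults
```

With this string, 3 frames yield 9 faults while 4 frames yield 10: adding a frame makes FIFO worse.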
Modeling Page Replacement Algorithms: Stack Algorithms
  • Belady's anomaly led to a theory of paging algorithms; a paging system is characterized by three items:
    • the reference string: the sequence of memory references the process makes as it runs
    • the page replacement algorithm
    • the number of page frames, m
Design Issues for Paging Systems: Local versus Global Allocation Policies
  • Original configuration
  • Local page replacement: a fixed number of frames is allocated per process
  • Global page replacement: frames are dynamically allocated among all processes
Design Issues for Paging Systems: Local versus Global Allocation Policies
  • In general, global algorithms work better
  • Continuously decide how many page frames to allocate to each process, e.g. based on its working set size
  • Algorithms for allocating page frames to processes:
    • an equal share for every process
    • shares proportional to each process's size
    • PFF (page fault frequency): measure each process's fault rate as a running mean and adjust its allocation accordingly
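The PFF control loop can be sketched as a simple threshold rule (the thresholds, units, and function name here are illustrative assumptions, not values from the slides):

```python
def pff_adjust(frames, fault_rate, upper=2.0, lower=0.5):
    """Page Fault Frequency allocation: if the measured fault rate
    (e.g. a running mean of faults per second) is above the upper
    threshold, grant the process one more frame; if it is below the
    lower threshold, reclaim one."""
    if fault_rate > upper:
        return frames + 1
    if fault_rate < lower and frames > 1:
        return frames - 1
    return frames
```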
Design Issues for Paging Systems: Local versus Global Allocation Policies

Page fault rate as a function of the number of page frames assigned

Design Issues for Paging Systems: Page Size

Small page size

  • Advantages
    • less internal fragmentation (on average, half of the last page is wasted)
    • better fit for small data structures and code sections
    • less unused program text in memory
  • Disadvantages
    • programs need many pages, hence larger page tables
    • more disk transfers, and more time to load the page table
Design Issues for Paging Systems: Page Size

  • Overhead due to the page table and internal fragmentation:

    overhead = s * e / p + p / 2

    (the first term is page table space, the second is internal fragmentation)

  • Where
    • s = average process size in bytes
    • p = page size in bytes
    • e = size of a page table entry in bytes
  • The overhead is minimized when p = sqrt(2 * s * e)
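A worked check of the trade-off (illustrative numbers: with s = 1 MB and e = 8 bytes, the optimum comes out to 4 KB):

```python
import math

def overhead(s, p, e):
    """Per-process overhead: page table space (s*e/p) plus expected
    internal fragmentation (half of the last page, p/2)."""
    return s * e / p + p / 2

def optimal_page_size(s, e):
    """d(overhead)/dp = -s*e/p**2 + 1/2 = 0  =>  p = sqrt(2*s*e)."""
    return math.sqrt(2 * s * e)
```

Any page size above or below this optimum gives a strictly larger combined overhead.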
Implementation Issues: Operating System Involvement with Paging

Four times when the OS is involved with paging:

  • Process creation
    • determine the program size, create and initialize the page table
    • allocate space in the swap area for paging in and out
    • record information about the page table and the swap area
  • Process execution
    • the MMU is reset for the new process and the TLB is flushed
    • the new process's page table is made current
    • optionally, prepaging
Implementation Issues: Operating System Involvement with Paging

Four times when the OS is involved with paging (continued):

  • Page fault time
    • read the hardware registers to determine the virtual address that caused the fault
    • swap the target page out and the needed page in
    • back up the program counter and re-execute the faulting instruction
  • Process termination time
    • release the page table, the pages, and the swap area
Implementation Issues: Page Fault Handling
  • The hardware traps to the kernel, saving the PC
  • The general registers are saved, and the OS is called from an assembly-code routine
  • The OS determines which virtual page is needed
  • The OS checks the validity and protection of the address. If OK, it seeks a page frame (a free one, or one to replace). Otherwise, it sends a signal to the process or kills it
  • If the selected frame is dirty, the OS schedules a disk write and suspends the process
Implementation Issues: Page Fault Handling
  • The OS brings the new page in from disk
  • The page tables are updated and the entry is marked as valid
  • The faulting instruction is backed up to the state it had when it began
  • The faulting process is scheduled, and the OS returns to the assembly routine
  • The registers are restored and the program continues
Implementation Issues: Instruction Backup
  • The instruction causing the fault is stopped partway through
  • After the page is fetched, the instruction must be restarted
  • The OS must determine where the first byte of the instruction is

An instruction causing a page fault

Implementation Issues: Instruction Backup
  • The OS can have difficulty determining the start of the instruction
  • Autoincrement and autodecrement registers: side effects of a partially executed instruction must be undone
  • Some CPUs copy the PC into an internal register before each instruction executes
  • If no such help is available, the OS must jump through hoops to recover
Implementation Issues: Locking Pages in Memory
  • Virtual memory and I/O occasionally interact
  • A process issues a system call to read from a device into a buffer
    • while it waits for the I/O, another process starts up
    • the second process has a page fault
    • the first process's buffer may be chosen to be paged out
  • Need a way to mark some pages as locked in memory
    • locked pages are exempted from being replacement targets (pinning)
  • Alternatively, do all I/O to kernel buffers
Implementation Issues: Backing Store
  • A special swap area on disk; as new processes are started, swap space is allocated for them.
  • The process table keeps the disk address of each process's swap area; calculating a page's disk address is easy (an offset from the start)
  • Complication: processes can grow during execution
  • Alternatively, allocate nothing in advance: allocate disk space for a page when it is swapped out, and deallocate it when the page comes back in. For each page not in memory, a map must record where it is on disk.
Implementation Issues: Backing Store

(a) Paging to static swap area

(b) Backing up pages dynamically

Segmentation
  • For many problems, 2 or more separate virtual address spaces are better than one
  • In a one-dimensional address space, growing tables may bump into one another
  • Solution: provide the machine with many completely independent address spaces, called segments
  • Different segments may have different lengths
  • To specify an address in this two-dimensional memory, the program supplies a segment number and an address within the segment
  • Advantages:
    • simpler linking
    • sharing
    • protection
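The two-part address lookup can be sketched as follows (an illustrative `seg_translate` helper; the base/limit segment-table layout is an assumption):

```python
def seg_translate(segment_table, seg, offset):
    """Translate a two-part (segment, offset) address to a linear one.
    segment_table maps segment number -> (base, limit); an offset at
    or beyond the limit is a protection violation, which is how
    segmentation enforces per-segment protection."""
    base, limit = segment_table[seg]
    if not 0 <= offset < limit:
        raise MemoryError("segmentation violation")
    return base + offset
```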

Comparison of paging and segmentation