This lecture…
  • Virtual memory
    • Demand paging
    • Page replacement
Demand Paging
  • Virtual memory – separation of user logical memory from physical memory.
  • Up to now, all of a job’s virtual address space had to be in physical memory.
  • But programs don’t use all of their memory all of the time.
  • In fact, there is a 90-10 rule: programs spend 90% of their time in 10% of their code.
  • Instead, use main memory as a cache for disk. Some pages in memory, some on disk.
Virtual Memory That is Larger Than Physical Memory
  • Benefits:
    • Bigger virtual address space: illusion of infinite memory
    • Allow more programs than fit in physical memory to run at the same time
Demand Paging
  • Bring a page into memory from the disk only when it is needed
    • this process is called demand paging
  • Extend page table entries with extra bit “present” (valid)
    • if the page is in memory, present = 1; if on disk, present = 0
    • translations on entries with present = 1 work as before
    • if present = 0, then translation causes a page fault.
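
As a concrete illustration, here is a minimal sketch of such a page table entry in C. The field names and widths are illustrative assumptions, not any real architecture’s layout:

#include <stdint.h>

/* Hypothetical 32-bit PTE: a "present" (valid) bit plus a frame number. */
typedef struct {
    uint32_t present : 1;   /* 1 = page is in physical memory, 0 = on disk */
    uint32_t dirty   : 1;   /* 1 = page modified since it was loaded       */
    uint32_t frame   : 20;  /* physical frame number; meaningful only if present = 1 */
} pte_t;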
Demand Paging Mechanism
  • Page table has “present” (valid) bit
    • If present, pointer to page frame in memory
    • If not present, go to disk
  • Hardware traps to OS on reference to invalid page
    • (In MIPS/Nachos, trap on TLB miss, OS checks page table valid bit)
  • OS software:
    • Check an internal table: if the reference is invalid → abort; otherwise the page is just not in memory
    • Find a free frame or evict one (which one?)
    • If evicted page modified (dirty), write it back to disk
    • Change its page table entry and invalidate TLB entry
    • Load new page into memory from disk
    • Update page table entry
    • Continue thread
  • All this is transparent: OS just runs another job in the meantime.
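
A hedged C sketch of this fault path follows. Every helper here (is_valid_reference, find_free_frame, choose_victim, disk_read_page, and so on) is a hypothetical name for one of the steps above, not a real kernel API; pte_t is the entry type sketched earlier.

/* Hypothetical fault handler mirroring the steps on this slide. */
void handle_page_fault(pte_t *pte, int vpn) {
    if (!is_valid_reference(vpn))
        abort_process();                     /* invalid reference -> abort */

    int frame = find_free_frame();           /* find a free frame...       */
    if (frame < 0) {                         /* ...or evict one (which one?) */
        frame = choose_victim();
        pte_t *victim = pte_for_frame(frame);
        if (victim->dirty)
            disk_write_page(frame, victim);  /* write back if modified     */
        victim->present = 0;                 /* change its page table entry */
        invalidate_tlb_entry(victim);        /* and invalidate the TLB entry */
    }
    disk_read_page(frame, vpn);              /* load new page from disk    */
    pte->frame = frame;                      /* update page table entry    */
    pte->present = 1;
    /* return from trap; hardware re-runs the faulting instruction */
}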
Software-loaded TLB
  • Instead of having the hardware load the TLB when a translation doesn’t match, the MIPS/Snake/Nachos TLB is software loaded.
  • The idea: if the TLB hit rate is high, it is OK to trap to software to fill the TLB, even if it is a bit slower.
  • This holds across workloads: database server, file server, web server, general computing.
  • How do we implement this? How can a process run without access to a page table?
Software-loaded TLB
  • Basic mechanism (just generalization of earlier):
  • TLB has “present” (valid) bit
    • if present, pointer to page frame in memory
    • if not present, use software page table
  • Hardware traps to OS on reference not in TLB
  • OS software:
    • check if page is in memory
    • if yes, load the page table entry into the TLB
    • if no, perform page fault operations outlined earlier
    • continue thread
  • Paging to disk, or even having software load the TLB – all this is transparent – job doesn’t know it happened.
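
In the same hypothetical style as the fault-handler sketch above (page_table and tlb_write are assumed names, not the actual MIPS or Nachos interface), the software miss handler might look like:

/* Hypothetical software TLB-miss handler. */
void tlb_miss_handler(int vpn) {
    pte_t *pte = &page_table[vpn];       /* check the software page table */
    if (!pte->present)
        handle_page_fault(pte, vpn);     /* fault path from the earlier slide */
    tlb_write(vpn, pte->frame);          /* load the entry into the TLB */
    /* return from trap; the faulting access re-executes and now hits */
}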
Why does this work?
  • Locality!
    • Temporal locality: will reference same locations as accessed in the recent past
    • Spatial locality: will reference locations near those accessed in the recent past
  • Locality means paging can be infrequent
    • once you’ve paged something in, it will be used many times
    • on average, you use things that are paged in
    • but, this depends on many things:
      • degree of locality in application
      • page replacement policy and application reference pattern
      • amount of physical memory and application footprint
Transparent page faults
  • Need to transparently re-start faulting instruction.
  • Hardware must help out, by saving:
    • the faulting instruction (need to know which instruction caused the fault)
    • processor state
  • What if an instruction has side effects (CISC processors)?

mov (sp)+,10

    (the autoincrement of sp is a side effect that must be undone or completed if the instruction faults)
  • Two options:
    • Unwind side-effects
    • Finish off side-effects
Transparent page faults
  • Are RISCs easier? What about delayed loads? Delayed branches?

ld (sp), r1

  • What if next instruction causes page fault, while load is still in progress?
  • Have to save enough state to allow CPU to restart.
Transparent page faults
  • Lots of hardware designers don’t think about virtual memory.
  • For example: block transfer instruction.
  • Source, destination can be overlapping (destination before source).
  • Overwrite part of source as instruction proceeds.
  • No way to unwind instruction!
  • IBM, VAX –
    • Run instruction twice
    • First time – read only;
      • Service page fault
      • pin page
    • Second time – real r/w

dest begin

Source begin

dest end

Source end

Performance of Demand Paging
  • Memory access time for most computers ranges from 10 to 200 nanoseconds
  • Page fault probability p, with 0 ≤ p ≤ 1.0
    • if p = 0 no page faults
    • if p = 1, every reference is a fault
  • Effective Access Time (EAT)

EAT = (1 – p) x memory access time
      + p x (page fault overhead + [swap page out] + swap page in + restart overhead)

Demand Paging Example
  • Memory access time = 1 microsecond
  • 50% of the time the page that is being replaced has been modified and therefore needs to be swapped out.
  • Swap Page Time = 10 msec = 10,000 microsec

Average fault service time = swap in (10,000 microsec) + 0.5 x swap out (10,000 microsec) = 15,000 microsec, so:

EAT = (1 – p) x 1 + p x 15,000
    ≈ 1 + 15,000p (in microsec)

Demand Paging Example
  • Memory access time = 100 nanoseconds
  • Page fault service time = 25 milliseconds
  • Effective access time = (1 – p) x 100 + p x (25 milliseconds)

= (1 – p) x 100 + p x 25,000,000

= 100 + 24,999,900 x p

  • Effective access time is directly proportional to the page-fault rate
  • Condition for less than 10-percent degradation:

110 > 100 + 25,000,000 x p

10 > 25,000,000 x p

p < 0.0000004

    • i.e., fewer than 1 page fault per 2,500,000 memory accesses
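
A small, self-contained C program to check these numbers (the values are the ones from the example above):

#include <stdio.h>

/* EAT = (1 - p) * memory access + p * fault service, in nanoseconds. */
static double eat_ns(double p, double mem_ns, double fault_ns) {
    return (1.0 - p) * mem_ns + p * fault_ns;
}

int main(void) {
    double mem = 100.0, fault = 25e6;   /* 100 ns access, 25 ms fault service */
    printf("p = 0     -> %.1f ns\n", eat_ns(0.0,   mem, fault));
    printf("p = 4e-7  -> %.1f ns\n", eat_ns(4e-7,  mem, fault));  /* ~110 ns: the 10% threshold */
    printf("p = 0.001 -> %.1f ns\n", eat_ns(0.001, mem, fault));  /* ~25,100 ns: paging dominates */
    return 0;
}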
Cool Paging Tricks
  • Virtual memory allows other benefits during process creation:
    • Copy-on-Write (COW), e.g. on fork( )
      • allows both parent and child processes to initially share the same pages in memory.
      • If either process modifies a shared page, only then is the page copied.
      • COW allows more efficient process creation as only modified pages are copied.
      • Free pages are allocated from a pool of zeroed-out pages.
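
A minimal runnable illustration of COW on a POSIX system (the 64 MB buffer size is an arbitrary assumption): after fork(), parent and child share the buffer’s physical pages until one of them writes.

#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t n = 64 * 1024 * 1024;   /* 64 MB: too big to copy eagerly */
    char *buf = malloc(n);
    memset(buf, 0, n);             /* touch the pages so they exist */

    pid_t pid = fork();            /* cheap: pages are shared copy-on-write */
    if (pid == 0) {
        buf[0] = 'x';              /* first write faults; only that page is copied */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}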
Another great trick
  • Memory-Mapped Files
    • instead of using open(), read(), write(), close()
      • “map” a file into a region of the virtual address space
        • e.g., into region with base ‘X’
      • accessing virtual address ‘X+N’ refers to offset ‘N’ in file
      • initially, all pages in mapped region marked as invalid
    • OS reads a page from file whenever invalid page accessed
    • OS writes a page to file when evicted from physical memory
      • only necessary if page is dirty
    • Share a file by mapping it into the virtual address space of each process
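
A runnable POSIX sketch of the idea (the file name data.bin is a placeholder; the file must already exist):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);   /* placeholder file name */
    if (fd < 0) return 1;
    struct stat st;
    if (fstat(fd, &st) < 0) return 1;

    /* "map" the file into a region of the address space;
     * x plays the role of base 'X' on the slide */
    char *x = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (x == MAP_FAILED) return 1;

    x[10] ^= 1;                  /* accessing 'X+10' touches file offset 10;
                                    the first access may page-fault and read
                                    the page in from the file */

    munmap(x, st.st_size);       /* dirty pages are written back */
    close(fd);
    return 0;
}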
Page replacement policies
  • Replacement policy is an issue with any caching system.
  • The goal of the page replacement algorithm:
    • reduce fault rate by selecting best victim page to remove
    • the best page to evict is one that will never be touched again
      • as process will never again fault on it
  • Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string.
First-In-First-Out (FIFO)
  • Throw out the oldest page. Fair: all pages get equal residency.
  • Bad, because it can throw out heavily used pages instead of rarely used ones.
  • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

3 frames (9 page faults; * marks a fault):

ref:  1* 2* 3* 4* 1* 2* 5* 1  2  3* 4* 5
      1  1  1  4  4  4  5  5  5  5  5  5
         2  2  2  1  1  1  1  1  3  3  3
            3  3  3  2  2  2  2  2  4  4

4 frames (10 page faults):

ref:  1* 2* 3* 4* 1  2  5* 1* 2* 3* 4* 5*
      1  1  1  1  1  1  5  5  5  5  4  4
         2  2  2  2  2  2  1  1  1  1  5
            3  3  3  3  3  3  2  2  2  2
               4  4  4  4  4  4  3  3  3

  • FIFO Replacement – Belady’s Anomaly
    • more frames do not imply fewer page faults

FIFO Page Replacement
  • With FIFO, the contents of memory can be completely different with a different number of page frames.
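
A self-contained C sketch that replays the reference string above under FIFO and reproduces both fault counts, illustrating Belady’s anomaly (frame count capped at 16 for simplicity):

#include <stdio.h>

/* Count FIFO page faults: frames[] holds resident pages and `hand`
 * always points at the oldest (first-in) frame, the next victim. */
static int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16], hand = 0, faults = 0;
    for (int j = 0; j < nframes; j++) frames[j] = -1;   /* all frames empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[hand] = refs[i];        /* evict the oldest page */
            hand = (hand + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));   /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));   /* 10 */
    return 0;
}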
Optimal (Min)
  • Lowest page-fault rate of all algorithms; called OPT or MIN.
  • Replace page that will not be used for longest period of time.
  • 4-frame example:

1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

  • Difficult to implement as it requires future knowledge of the reference string.
  • Used for measuring how well your algorithm performs.

4 frames, OPT (6 page faults; * marks a fault):

ref:  1* 2* 3* 4* 1  2  5* 1  2  3  4* 5
      1  1  1  1  1  1  1  1  1  1  4  4
         2  2  2  2  2  2  2  2  2  2  2
            3  3  3  3  3  3  3  3  3  3
               4  4  4  5  5  5  5  5  5
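
A sketch of OPT fault counting in the same style as the FIFO simulator above (it plugs into the same main()). Note that choosing a victim scans the rest of the reference string — exactly the future knowledge a real system does not have:

/* OPT/MIN: on a miss with full frames, evict the resident page whose
 * next use lies farthest in the future. */
static int opt_faults(const int *refs, int n, int nframes) {
    int frames[16], used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < nframes) { frames[used++] = refs[i]; continue; }
        int victim = 0, farthest = -1;
        for (int j = 0; j < used; j++) {
            int next = n;                      /* n means "never used again" */
            for (int k = i + 1; k < n; k++)
                if (refs[k] == frames[j]) { next = k; break; }
            if (next > farthest) { farthest = next; victim = j; }
        }
        frames[victim] = refs[i];              /* evict page used farthest ahead */
    }
    return faults;
}

On the 12-reference string above with 4 frames this returns 6, matching the table.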

Least Recently Used (LRU)
  • Replace the page that has not been used for the longest period of time.
  • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
  • This is the optimal page-replacement algorithm looking backward in time, rather than forward.

4 frames, LRU (8 page faults; * marks a fault):

ref:  1* 2* 3* 4* 1  2  5* 1  2  3* 4* 5*
      1  1  1  1  1  1  1  1  1  1  1  5
         2  2  2  2  2  2  2  2  2  2  2
            3  3  3  3  5  5  5  5  4  4
               4  4  4  4  4  4  3  3  3

Implementing Perfect LRU
  • Timestamp page on each reference
    • At eviction time scan for oldest
    • A clock counter can be kept for each page entry
  • Problems:
    • scanning large page lists is slow
    • no hardware support for time stamps
Implementing Perfect LRU
  • Keep list of pages ordered by time of reference
    • Keep a stack of page numbers in a double link form
    • Page referenced:
      • move it to the top of stack
      • bottom is the LRU page
    • each move to the top requires changing 6 pointers
    • No search for replacement: the bottom of the stack is always the victim
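
A minimal C sketch of this doubly linked "stack" (the type and function names are illustrative assumptions): moving a referenced page to the top and evicting from the bottom each take constant time.

#include <stddef.h>

typedef struct page {
    int number;
    struct page *prev, *next;
} page_t;

typedef struct {
    page_t *top;      /* most recently used */
    page_t *bottom;   /* least recently used: the eviction victim */
} lru_stack_t;

/* Page referenced: move it to the top of the stack.
 * Unlinking p and splicing it in at the head touches the
 * 6 pointers noted above. */
static void reference(lru_stack_t *s, page_t *p) {
    if (s->top == p) return;
    if (p->prev) p->prev->next = p->next;     /* unlink p */
    if (p->next) p->next->prev = p->prev;
    if (s->bottom == p) s->bottom = p->prev;
    p->prev = NULL;                           /* splice in at the top */
    p->next = s->top;
    if (s->top) s->top->prev = p;
    s->top = p;
}

/* No search for replacement: the victim is simply s->bottom. */
static page_t *lru_victim(lru_stack_t *s) { return s->bottom; }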