
Memory Management

Basic memory management

Swapping

Virtual memory

Page replacement algorithms

Chapter 4



Memory Management

  • Ideally programmers want memory that is

    • large

    • fast

    • non-volatile

  • Memory hierarchy

    • small amount of fast, expensive memory – cache

    • some medium-speed, medium price main memory

    • gigabytes of slow, cheap disk storage

  • Memory manager handles the memory hierarchy



Basic Memory Management: Monoprogramming without Swapping or Paging

Three simple ways of organizing memory for an operating system with one user process



Multiprogramming with Fixed Partitions

  • Fixed memory partitions

    • separate input queues for each partition

    • single input queue

  • Disadvantages

    • wasted space: a process rarely fills its partition exactly

    • with separate queues, the queue for a small partition may be full while the queue for a large partition sits empty

    • with a single queue, if the largest fitting process is always selected, small processes are deprived
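As a sketch (the partition sizes and job sizes below are made up, not from the slides), a single input queue can be served by picking the smallest free partition that fits each job:

```python
# Sketch: assigning jobs from a single input queue to fixed partitions.
# pick_partition and the example sizes are illustrative, not from the slides.

def pick_partition(job_size, partitions):
    """Return the index of the smallest free partition that fits, else None."""
    best = None
    for i, (size, free) in enumerate(partitions):
        if free and size >= job_size:
            if best is None or size < partitions[best][0]:
                best = i
    return best

# partitions: list of (size_in_KB, is_free)
parts = [(100, True), (200, True), (500, True)]
i = pick_partition(180, parts)        # fits the 200 KB partition, not the 500 KB one
```

Choosing the smallest fitting partition wastes the least space per job, which is exactly the policy that can deprive small jobs when combined with "pick the largest job that fits".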



Relocation and Protection

  • Relocation

    • Cannot be sure where program will be loaded in memory

    • suppose the first instruction of a program is a jump to absolute address 200 within the executable file, and the program is loaded into partition 1 (previous slide), which starts at 100K. The jump must actually go to address 100K + 200.

    • This problem is called relocation

  • Relocation Solution

    • As the program is loaded into memory, modify the instructions accordingly.

    • Address locations are added to the base location of the partition to map to the physical address.

  • Protection

    • Must keep a program out of other processes’ partitions

    • Relocation during loading may cause protection problem

  • Protection Solution

    • Address locations larger than the limit value are an error
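The base/limit scheme can be sketched as follows; the `translate` function is illustrative, not a real MMU interface, and the 100K example matches the partition from the relocation bullet:

```python
# Sketch of base/limit relocation and protection: every address the program
# issues is checked against the limit register, then offset by the base register.

class ProtectionFault(Exception):
    pass

def translate(virtual_addr, base, limit):
    """Relocate an address; addresses at or beyond the limit are a protection error."""
    if virtual_addr >= limit:
        raise ProtectionFault(f"address {virtual_addr} outside partition")
    return base + virtual_addr

# Program loaded into a partition starting at 100K (102400 bytes), size 64K:
phys = translate(200, base=102400, limit=65536)   # the jump to 200 lands at 100K + 200
```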



Swapping and Virtual Memory

  • Sometimes there is not enough main memory to hold all the currently active processes.

  • So the excess processes must be kept on disk and brought in to run dynamically

  • Two approaches

    • Swapping:

      • bring each process in entirely, run it for a while, then put it back on the disk

    • Virtual memory:

      • bring each process in only partially, never entirely



Swapping (1)

  • Memory allocation changes as

    • processes come into memory and leave memory

  • No fixed partitioning of memory as before: the number, location and size of partitions vary dynamically

  • High utilization of memory, but allocation and de-allocation become more complicated



Swapping (2)

  • Allocating space for growing data segment

  • Allocating space for growing stack & data segment



Memory Management for Dynamic Allocation (3)

  • When memory is assigned dynamically, the OS must manage it. Two ways:

    • With bit maps

    • With linked list



Memory Management with Bit Maps and Linked List (4)

  • Bit Map

    • 1 = Allocated block

    • 0 = Free block

  • Linked List

    • P = Process allocation

    • H = Hole
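A minimal sketch of bitmap allocation, using the slide's convention (1 = allocated, 0 = free); the `alloc` helper and the example bitmap are hypothetical:

```python
# Sketch of bitmap-based allocation: find a run of n consecutive free
# allocation units (0 bits), mark them allocated (1), return the start index.

def alloc(bitmap, n):
    """First-fit search for n consecutive 0 bits; returns start index or None."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            start = i - n + 1
            for j in range(start, start + n):
                bitmap[j] = 1
            return start
    return None

bm = [1, 1, 0, 0, 0, 1, 0, 0]
start = alloc(bm, 3)    # finds the 3-unit hole starting at unit 2
```

The search for a run of 0 bits is what makes bitmap allocation slow; the linked-list representation keeps holes as explicit records instead.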



Virtual Memory (1)

  • The basic idea behind virtual memory is that the combined size of the program, its data, and its stack may exceed the amount of physical memory available for it.

  • The OS keeps part of the program currently in use in main memory, and the rest on the disk.

  • For example

    • A 16 MB program can run on a machine with 4 MB of memory

  • Virtual memory also allows many programs in memory at once.

  • Most virtual memory systems use a technique called paging



Paging (2)

The relation between virtual addresses and physical memory addresses is given by the page table
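A toy version of that mapping, assuming 4 KB pages and a flat page table (the frame numbers are made up):

```python
# Sketch of virtual-to-physical translation through a page table:
# split the virtual address into (page number, offset), look up the frame.

PAGE_SIZE = 4096

def map_address(vaddr, page_table):
    """Translate a virtual address; a non-resident page raises a fault."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]
    if frame is None:                 # page not in memory: page fault
        raise RuntimeError("page fault")
    return frame * PAGE_SIZE + offset

table = {0: 2, 1: None, 2: 6}        # virtual page -> physical frame
paddr = map_address(8195, table)     # page 2, offset 3 -> frame 6
```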



TLBs – Translation Lookaside Buffers (3)

A TLB to speed up paging
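A sketch of the idea: a small associative cache consulted before the page table. The `TLB` class and its simplistic eviction are illustrative only; real TLBs are hardware:

```python
# Sketch of a TLB in front of the page table: hits skip the page-table
# walk; misses walk the table and install the mapping in the TLB.

class TLB:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}             # virtual page -> physical frame

    def lookup(self, page, page_table):
        if page in self.entries:      # TLB hit: no page-table walk
            return self.entries[page]
        frame = page_table[page]      # TLB miss: walk the (slow) page table
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))   # evict some entry
        self.entries[page] = frame
        return frame

tlb = TLB()
table = {0: 5, 1: 9}
tlb.lookup(0, table)                  # miss: installs the entry
tlb.lookup(0, table)                  # hit: served from the TLB
```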



Page Replacement Algorithms

  • Page fault forces choice

    • which page must be removed

    • make room for incoming page

  • Modified page must first be saved

    • unmodified just overwritten

  • Better not to choose an often used page

    • will probably need to be brought back in soon



Optimal Page Replacement Algorithm

  • Replace the page that will be needed at the farthest point in the future

    • Optimal but unrealizable

  • Estimate by …

    • logging page use on previous runs of process

    • although this is impractical
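The optimal policy is easy to simulate offline when the whole reference string is known; this sketch (a hypothetical `opt_faults` helper on a made-up reference string) counts the faults it incurs:

```python
# Sketch of optimal (Belady) replacement: on a fault with memory full,
# evict the page whose next use lies farthest in the future (or never).

def opt_faults(refs, frames):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) < frames:
            mem.append(p)
        else:
            def next_use(q):          # distance to the page's next reference
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else len(refs)
            mem.remove(max(mem, key=next_use))
            mem.append(p)
    return faults

opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)
```

No online algorithm can beat this count, which makes it a useful baseline for comparing the realizable algorithms that follow.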



Not Recently Used Page Replacement Algorithm

  • Each page has Reference bit, Modified bit

    • bits are set when page is referenced, modified

  • Pages are classified

    • not referenced, not modified

    • not referenced, modified

    • referenced, not modified

    • referenced, modified

  • NRU removes page at random

    • from the lowest-numbered non-empty class
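The classification can be sketched as class = 2·R + M, with a random victim drawn from the lowest non-empty class (the page names and bits below are invented):

```python
# Sketch of NRU: classify pages by (Referenced, Modified) bits into
# classes 0..3, then evict a random page from the lowest non-empty class.

import random

def nru_victim(pages):
    """pages: dict name -> (referenced, modified). Returns a victim name."""
    cls = lambda rm: 2 * rm[0] + rm[1]
    lowest = min(cls(rm) for rm in pages.values())
    candidates = [p for p, rm in pages.items() if cls(rm) == lowest]
    return random.choice(candidates)

pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 1)}
victim = nru_victim(pages)    # B or D: class 1 (not referenced, modified)
```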



FIFO Page Replacement Algorithm

  • Maintain a linked list of all pages

    • in order they came into memory

  • Page at beginning of list is replaced

  • Disadvantage

    • Highly used page may be a victim
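A minimal FIFO simulation of the list-in-load-order idea (the `fifo_faults` helper and the reference string are illustrative):

```python
# Sketch of FIFO replacement: pages queue up in load order; on a fault
# with memory full, the page at the head of the queue is the victim.

from collections import deque

def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for p in refs:
        if p in mem:
            continue                  # hit: FIFO order is not updated
        faults += 1
        if len(mem) == frames:
            mem.popleft()             # oldest page leaves, however heavily used
        mem.append(p)
    return faults

fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)
```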



Second Chance Page Replacement Algorithm

  • Operation of second chance

    • pages are kept sorted in FIFO order

    • page list when a fault occurs at time 20 and page A has its R bit set (the numbers above the pages are their loading times)
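A sketch of the eviction step, assuming pages are kept in a FIFO queue of (name, R-bit) pairs:

```python
# Sketch of second chance: FIFO, except a page with R = 1 is spared once,
# its bit is cleared and it moves to the tail as if newly loaded.

from collections import deque

def second_chance_victim(pages):
    """pages: deque of [name, R] in FIFO order; removes and returns a victim name."""
    while True:
        name, r = pages.popleft()
        if r:
            pages.append([name, 0])   # second chance: clear R, move to tail
        else:
            return name

q = deque([["A", 1], ["B", 0], ["C", 1]])
victim = second_chance_victim(q)      # A is spared (R was set); B is evicted
```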



The Clock Page Replacement Algorithm
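The clock algorithm is the same policy as second chance, but the pages stay in place and a "hand" sweeps a circular list, clearing R bits until it finds one that is 0. A sketch (all frame names and bits below are made up):

```python
# Sketch of the clock algorithm: advance the hand around a circular list
# of frames, clearing R bits, until a frame with R = 0 is found.

def clock_victim(pages, hand):
    """pages: list of [name, R]; returns (victim index, new hand position)."""
    while True:
        name, r = pages[hand]
        if r:
            pages[hand][1] = 0                # give a second chance, move on
            hand = (hand + 1) % len(pages)
        else:
            return hand, (hand + 1) % len(pages)

frames = [["A", 1], ["B", 1], ["C", 0], ["D", 1]]
victim, hand = clock_victim(frames, 0)        # sweeps past A and B, picks C
```

Keeping the frames in place avoids the constant list surgery of second chance; only the hand moves.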



Least Recently Used (LRU)

  • Assume pages used recently will be used again soon

    • throw out page that has been unused for longest time

  • Must keep a linked list of pages

    • most recently used at front, least at rear

    • update this list every memory reference !!

  • Alternatively keep counter in each page table entry

    • choose page with lowest value counter

    • periodically zero the counter
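The linked-list formulation can be sketched with an `OrderedDict` standing in for the list, most recently used at one end (the `lru_faults` helper is illustrative):

```python
# Sketch of LRU replacement: every reference moves the page to the
# "most recent" end; the victim is whatever sits at the other end.

from collections import OrderedDict

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)        # referenced: becomes most recently used
            continue
        faults += 1
        if len(mem) == frames:
            mem.popitem(last=False)   # least recently used is at the front
        mem[p] = True
    return faults

lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3)
```

The `move_to_end` on every hit is exactly the "update this list every memory reference" cost that makes exact LRU expensive in practice.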



Page Size (1)

Small page size

  • Advantages

    • less internal fragmentation

    • less unused program in memory

  • Disadvantages

    • programs need many pages, larger page tables



Page Size (2)

  • Overhead due to page table and internal fragmentation:

    overhead = s·e/p + p/2

    • the first term is the page table space, the second the internal fragmentation loss

    • the overhead is minimized when p = √(2se)

  • Where

    • s = average process size in bytes

    • p = page size in bytes

    • e = size of a page table entry in bytes
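Plugging made-up values (s = 1 MB, e = 8 bytes) into overhead(p) = s·e/p + p/2 shows the minimum at p = √(2se):

```python
# Sketch: the page-size trade-off. Smaller pages mean bigger page tables
# (s*e/p); larger pages waste more of the last page (p/2 on average).

from math import sqrt

def overhead(p, s, e):
    return s * e / p + p / 2

s, e = 1 << 20, 8                     # example: 1 MB process, 8-byte entries
p_opt = sqrt(2 * s * e)               # = 4096 bytes for these values
```

With these numbers both terms are 2048 bytes at p = 4096, and any other page size raises the total, which matches the usual intuition behind the common 4 KB page.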



Page Fault Handling (1)

1. Hardware traps to kernel

2. OS determines which virtual page needed

3. Save registers

4. OS checks validity of address, seeks page frame

5. If selected frame is dirty, write it to disk



Page Fault Handling (2)

6. OS brings new page in memory from disk

7. Page tables updated

8. Faulting instruction backed up to the state it had when it began

9. Faulting process scheduled

10. Registers restored

11. Program continues

