
Memory Management

From Chapter 4, Modern Operating Systems, Andrew S. Tanenbaum

Memory Management

  • Ideally programmers want memory that is

    • large

    • fast

    • non volatile

  • Memory hierarchy

    • small amount of fast, expensive memory – cache

    • some medium-speed, medium price main memory

    • gigabytes of slow, cheap disk storage

  • Memory managers handle the memory hierarchy

Basic Memory Management: Monoprogramming without Swapping or Paging

Three simple ways of organizing memory with an operating system and one user process

Multiprogramming with Fixed Partitions

  • Fixed memory partitions

    • separate input queues for each partition

    • single input queue

Relocation and Protection

  • Cannot be sure where program will be loaded in memory

    • address locations of variables, code routines cannot be absolute

    • must keep a program out of other processes’ partitions

  • Use base and limit values

    • address locations are added to the base value to map to a physical address

    • addresses larger than the limit value are an error

  • Self-relocation

    • Program computes its own references
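As an illustrative sketch (not part of the original slides), base-and-limit translation fits in a few lines of Python; the register values in the example are invented:

```python
def translate(virtual_addr, base, limit):
    """Map a virtual address to a physical one using base/limit registers."""
    if virtual_addr >= limit:       # outside this process's partition
        raise MemoryError("limit violation")
    return base + virtual_addr      # relocation: add the base register

# A process loaded at physical 0x4000 with a 16 KB partition:
print(hex(translate(0x0100, base=0x4000, limit=0x4000)))  # 0x4100
```

Every memory reference pays the addition; any address at or above the limit traps instead of touching another process's partition.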

Swapping (1)

Memory allocation changes as

  • processes come into memory

  • processes leave memory

    Shaded regions are unused memory (external fragmentation)

Swapping (2)

  • Allocating space for growing data segment

  • Allocating space for growing stack & data segment

Virtual Memory: Paging (1)

The position and function of the MMU

Paging (2)

The relation between virtual addresses and physical memory addresses, as given by the page table
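As a sketch of that mapping (assuming 4 KB pages; the table contents below are invented), a flat page table takes a virtual page number to a physical frame number and carries the offset through unchanged:

```python
PAGE_SIZE = 4096  # 4 KB pages (assumption)

def v2p(vaddr, page_table):
    """Translate a virtual address via a flat page table.

    page_table maps virtual page number -> physical frame number;
    a missing entry means the page is not present (page fault).
    """
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise LookupError("page fault on page %d" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

pt = {0: 2, 1: 5}            # page 0 -> frame 2, page 1 -> frame 5
print(hex(v2p(0x1004, pt)))  # page 1, offset 4 -> frame 5 -> 0x5004
```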


Page Tables (1)

Internal operation of the MMU with sixteen 4 KB pages

Page Tables (2)

Second-level page tables

  • 32-bit address with 2 page table fields

  • Two-level page tables
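As a sketch (assuming the common 10 + 10 + 12 split, which is not stated on the slide), a 32-bit address is cut into a top-level index, a second-level index, and a page offset:

```python
def split_address(vaddr):
    """Split a 32-bit virtual address into PT1 | PT2 | offset (10+10+12 bits)."""
    offset = vaddr & 0xFFF        # low 12 bits: byte within a 4 KB page
    pt2 = (vaddr >> 12) & 0x3FF   # next 10 bits: index into a second-level table
    pt1 = (vaddr >> 22) & 0x3FF   # top 10 bits: index into the top-level table
    return pt1, pt2, offset

print(split_address(0x00403004))  # (1, 3, 4)
```

The point of the two levels is that second-level tables for unused regions of the address space need not exist at all.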


Page Tables (3)

Typical page table entry
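The exact bit layout of an entry is hardware-specific; the layout below is an assumption chosen only to illustrate the typical fields (present/absent, modified, referenced, protection, caching disabled, frame number):

```python
# Assumed layout (real layouts vary by machine):
# bit 0: present, bit 1: modified (dirty), bit 2: referenced,
# bits 3-5: protection, bit 6: caching disabled, bits 12+: frame number
def decode_pte(pte):
    return {
        "present":        bool(pte & 0x1),
        "modified":       bool(pte & 0x2),
        "referenced":     bool(pte & 0x4),
        "protection":     (pte >> 3) & 0x7,
        "cache_disabled": bool(pte & 0x40),
        "frame":          pte >> 12,
    }

print(decode_pte(0x5007))  # present, modified, referenced page in frame 5
```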

TLBs – Translation Lookaside Buffers

A TLB to speed up paging
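A TLB can be modeled as a small, fully associative cache of recent translations. This sketch (not from the slides) uses FIFO eviction purely for simplicity; real TLBs use hardware-chosen policies:

```python
class TLB:
    """Tiny software model of a TLB: a small cache of vpn -> frame
    translations, with FIFO eviction (a simplifying assumption)."""
    def __init__(self, size=4):
        self.size = size
        self.entries = {}   # vpn -> frame
        self.order = []     # insertion order, oldest first

    def lookup(self, vpn):
        return self.entries.get(vpn)   # None means TLB miss: walk the page table

    def insert(self, vpn, frame):
        if vpn not in self.entries and len(self.entries) >= self.size:
            old = self.order.pop(0)    # evict the oldest entry
            del self.entries[old]
        if vpn not in self.entries:
            self.order.append(vpn)
        self.entries[vpn] = frame

tlb = TLB(size=2)
tlb.insert(1, 5); tlb.insert(2, 9); tlb.insert(3, 7)  # third insert evicts vpn 1
print(tlb.lookup(1), tlb.lookup(3))  # None 7
```

A hit avoids the page-table walk entirely, which is why even a handful of entries speeds up paging so much.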

Inverted Page Tables

Comparison of a traditional page table with an inverted page table
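As a sketch of the inverted organization (details invented for illustration): there is one entry per physical frame rather than one per virtual page, and a hash table keyed on (process, virtual page) avoids scanning every frame on each reference:

```python
# One entry per physical frame instead of one per virtual page;
# a hash table maps (pid, vpn) -> frame to avoid a linear scan.
def build_inverted(frames):
    """frames: list where frames[f] = (pid, vpn), or None if the frame is free."""
    index = {}
    for f, owner in enumerate(frames):
        if owner is not None:
            index[owner] = f
    return index

frames = [(1, 0), (2, 0), (1, 7), None]
index = build_inverted(frames)
print(index[(1, 7)])  # process 1, virtual page 7 lives in frame 2
```

The table size now scales with physical memory, not with the (possibly huge) virtual address space.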

Page Replacement Algorithms (1)

  • Page fault forces choice

    • which page must be removed

    • make room for incoming page

  • Modified page must first be saved

    • unmodified just overwritten

  • Better not to choose an often used page

    • will probably need to be brought back in soon

Page Replacement Algorithms (2)

Optimal Page Replacement

Replace the page needed at the farthest point in the future

Optimal but unrealizable
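It is unrealizable online because it needs the future reference string, but it can be simulated after the fact as a yardstick. A minimal sketch (reference string and frame count are made-up examples):

```python
def optimal(refs, nframes):
    """Belady's optimal algorithm: evict the resident page whose next
    use lies farthest in the future. Returns the number of page faults."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")   # never used again: the ideal victim
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # 6
```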

Not Recently Used (NRU)
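NRU classifies pages by their referenced (R) and modified (M) bits into four classes and evicts a random page from the lowest-numbered non-empty class. A sketch with an invented page list:

```python
import random

def nru_victim(pages):
    """NRU: class 0 = not referenced, not modified; class 1 = M only;
    class 2 = R only; class 3 = both. Evict at random from the lowest
    non-empty class."""
    cls = min(2 * p["R"] + p["M"] for p in pages)
    candidates = [p for p in pages if 2 * p["R"] + p["M"] == cls]
    return random.choice(candidates)

pages = [{"id": 0, "R": 1, "M": 1},
         {"id": 1, "R": 0, "M": 1},
         {"id": 2, "R": 1, "M": 0}]
print(nru_victim(pages)["id"])  # 1: class 1 beats classes 2 and 3
```

The intuition: an unreferenced, dirty page (class 1) is a better victim than a recently referenced clean one (class 2), because recent use predicts imminent reuse.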


Second Chance
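Second chance is FIFO with a reference bit: a page reaching the head with R = 1 is spared once (R cleared, moved to the tail). A minimal sketch, with an invented reference string:

```python
from collections import deque

def second_chance(refs, nframes):
    """FIFO with a reference bit; returns the page fault count."""
    queue = deque()   # (page, R-bit) pairs, head = oldest
    faults = 0
    for page in refs:
        if page in {p for p, _ in queue}:
            # hit: set the reference bit
            queue = deque((p, 1 if p == page else r) for p, r in queue)
            continue
        faults += 1
        if len(queue) < nframes:
            queue.append((page, 0))
            continue
        while True:
            p, r = queue.popleft()
            if r:
                queue.append((p, 0))   # second chance: clear R, re-queue
            else:
                break                  # R == 0: evict p
        queue.append((page, 0))
    return faults

print(second_chance([1, 2, 3, 1, 4, 5], 3))  # 5
```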


Least Recently Used (LRU)
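True LRU evicts the page unused for the longest time. A sketch using an ordered map as the recency list (the reference string is a made-up example):

```python
from collections import OrderedDict

def lru(refs, nframes):
    """True LRU: on a hit, move the page to the most-recent end; on a
    fault with full frames, evict the least recently used page."""
    frames = OrderedDict()   # keys ordered oldest -> newest
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)     # just used: now most recent
            continue
        faults += 1
        if len(frames) >= nframes:
            frames.popitem(last=False)   # evict the least recently used
        frames[page] = True
    return faults

print(lru([1, 2, 3, 1, 4, 5], 3))  # 5
```

Exact LRU is expensive in hardware, which motivates the approximations (NFU, aging) on the following slides.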

Not Frequently Used (NFU)
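The aging variant of NFU approximates LRU in software: at every clock tick each page's counter is shifted right one bit and the reference bit is added at the left end, so recent references dominate old ones. A sketch with invented reference bits:

```python
def aging_counters(ref_bits_over_time, nbits=8):
    """NFU with aging: ref_bits_over_time[t][p] is page p's R bit at
    tick t. Returns the final counters; the smallest counter belongs
    to the best eviction candidate."""
    npages = len(ref_bits_over_time[0])
    counters = [0] * npages
    for tick in ref_bits_over_time:
        for p, r in enumerate(tick):
            counters[p] = (counters[p] >> 1) | (r << (nbits - 1))
    return counters

# Page 0 referenced every tick, page 1 only in the first tick:
c = aging_counters([[1, 1], [1, 0], [1, 0]])
print([bin(x) for x in c])  # ['0b11100000', '0b100000']
```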


Working Set


Design Issues for Paging Systems: Local versus Global Allocation Policies

  • Original configuration

  • Global page replacement

  • Local page replacement

Cleaning Policy (Garbage Collection)

  • Need for a background process, paging daemon

    • periodically inspects state of memory

  • When too few page frames are free

    • selects pages to evict using a replacement algorithm

Load Control

  • Despite good designs, system may still thrash when

    • some processes need more memory

    • but no processes need less

  • Solution: reduce the number of processes competing for memory

    • swap one or more to disk, divide up pages they held

    • reconsider degree of multiprogramming

Page Size

Small page size

  • Advantages

    • less internal fragmentation

    • better fit for various data structures, code sections

  • Disadvantages

    • programs need more pages, hence larger page tables
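This trade-off can be made quantitative, following Tanenbaum's analysis: with average process size s, page size p, and page table entry size e, the per-process overhead is s·e/p (page table) plus p/2 (average internal fragmentation in the last page), which is minimized at p = √(2se). A sketch of the arithmetic:

```python
import math

def overhead(s, e, p):
    """Per-process overhead for page size p: page table space (s/p
    entries of e bytes each) plus half a page of internal fragmentation."""
    return s * e / p + p / 2

def optimal_page_size(s, e):
    return math.sqrt(2 * s * e)   # minimizes the overhead above

# A 1 MB process with 8-byte page table entries:
print(round(optimal_page_size(1 << 20, 8)))  # 4096
```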

Separate Instruction and Data Spaces

  • One address space

  • Separate I and D spaces

Shared Pages

Two processes sharing the same program share its page table



References

  • Chapters 8 and 9: Operating System Concepts, Silberschatz, Galvin, Gagne

  • Chapter 4: Modern Operating Systems, Andrew S. Tanenbaum

  • x86 architecture

  • Memory segment

  • Memory model

  • IA-32 Intel Architecture Software Developer’s Manual, Volume 1: Basic Architecture