Memory Management

From Chapter 4, Modern Operating Systems, Andrew S. Tanenbaum

Memory Management
  • Ideally programmers want memory that is
    • large
    • fast
    • non-volatile
  • Memory hierarchy
    • small amount of fast, expensive memory – cache
    • some medium-speed, medium-priced main memory
    • gigabytes of slow, cheap disk storage
  • Memory managers handle the memory hierarchy
Basic Memory Management: Monoprogramming without Swapping or Paging

Three simple ways of organizing memory with an operating system and one user process

Multiprogramming with Fixed Partitions
  • Fixed memory partitions
    • separate input queues for each partition
    • single input queue
Relocation and Protection
  • Cannot be sure where program will be loaded in memory
    • address locations of variables, code routines cannot be absolute
    • must keep a program out of other processes’ partitions
  • Use base and limit values (sketched below)
    • address locations are added to the base value to map to a physical address
    • address locations larger than the limit value are an error
  • Self-relocation
    • Program computes its own references
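
A minimal sketch (not from the slides) of the base-and-limit scheme described above: every program-relative address is checked against the limit and then relocated by adding the base. The `partition_t` type and `translate()` helper are illustrative names, not any real MMU interface.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical descriptor for one process's partition: a base register
 * (where the partition starts in physical memory) and a limit register
 * (its size in bytes). */
typedef struct {
    uint32_t base;
    uint32_t limit;
} partition_t;

/* Relocate and protect: every program-relative address is checked against
 * the limit, then the base is added to form the physical address. */
static uint32_t translate(const partition_t *p, uint32_t vaddr)
{
    if (vaddr >= p->limit) {
        fprintf(stderr, "protection fault: address 0x%x outside partition\n",
                (unsigned)vaddr);
        exit(EXIT_FAILURE);
    }
    return p->base + vaddr;
}

int main(void)
{
    partition_t part = { .base = 0x40000, .limit = 0x10000 };   /* 64 KB partition */
    printf("virtual 0x1234 -> physical 0x%x\n", (unsigned)translate(&part, 0x1234));
    return 0;
}
```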
Swapping (1)

Memory allocation changes as

  • processes come into memory
  • leave memory

Shaded regions are unused memory (external fragmentation)

Swapping (2)
  • Allocating space for growing data segment
  • Allocating space for growing stack & data segment
Virtual Memory: Paging (1)

The position and function of the MMU

Paging (2)

The relation between virtual addresses and physical memory addresses is given by the page table

Page Tables (1)

Internal operation of the MMU with 16 4-KB pages
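
To make the figure concrete, here is a hedged sketch of the translation it depicts, assuming the 16-page, 4-KB-page setup: a 16-bit virtual address splits into a 4-bit page number and a 12-bit offset, the page number indexes the page table, and the frame number replaces it. The `pte_t` type and `mmu_translate()` function are illustrative, not a real hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 16 virtual pages of 4 KB each, as in the figure: a 16-bit virtual address
 * is split into a 4-bit page number and a 12-bit offset. */
#define NUM_PAGES 16u

typedef struct {
    bool     present;   /* present/absent bit */
    uint32_t frame;     /* page frame number, valid only if present */
} pte_t;

static pte_t page_table[NUM_PAGES];   /* filled in by the OS; illustrative here */

/* What the MMU does on every reference: index the page table with the page
 * number and glue the frame number onto the unchanged offset. */
static bool mmu_translate(uint16_t vaddr, uint32_t *paddr)
{
    unsigned page   = vaddr >> 12;       /* top 4 bits  */
    unsigned offset = vaddr & 0xFFF;     /* low 12 bits */

    if (!page_table[page].present)
        return false;                    /* page fault: trap to the OS */

    *paddr = (page_table[page].frame << 12) | offset;
    return true;
}

int main(void)
{
    page_table[2] = (pte_t){ .present = true, .frame = 6 };   /* example mapping */
    uint32_t pa;
    if (mmu_translate(0x20AC, &pa))                  /* page 2, offset 0x0AC */
        printf("virtual 0x20AC -> physical 0x%04X\n", (unsigned)pa);   /* 0x60AC */
    return 0;
}
```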

Page Tables (2)

  • 32-bit address with two page-table fields
  • Two-level page tables: the top-level page table points to second-level page tables (walk sketched below)
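
A sketch of the two-level walk the figure describes, assuming the 10-bit/10-bit/12-bit split of a 32-bit address; the `toplevel_t` and `walk()` names are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* The 32-bit address split used in the example: a 10-bit top-level index
 * (PT1), a 10-bit second-level index (PT2), and a 12-bit offset. */
#define PT1(va)    (((va) >> 22) & 0x3FFu)
#define PT2(va)    (((va) >> 12) & 0x3FFu)
#define OFFSET(va) ((va) & 0xFFFu)

typedef struct {
    int      present;
    uint32_t frame;
} pte_t;

typedef struct {
    pte_t *tables[1024];   /* top-level page table: pointers to second-level tables */
} toplevel_t;

/* Walk the two levels.  Regions of the address space that are never touched
 * keep a NULL pointer here, which is the point of the scheme: only the parts
 * actually in use need second-level page-table memory. */
static int walk(const toplevel_t *top, uint32_t va, uint32_t *pa)
{
    pte_t *second = top->tables[PT1(va)];
    if (second == NULL || !second[PT2(va)].present)
        return -1;                                   /* page fault */
    *pa = (second[PT2(va)].frame << 12) | OFFSET(va);
    return 0;
}

int main(void)
{
    static pte_t second[1024];
    toplevel_t top = { { NULL } };

    second[PT2(0x00403123u)] = (pte_t){ .present = 1, .frame = 42 };
    top.tables[PT1(0x00403123u)] = second;

    uint32_t pa;
    if (walk(&top, 0x00403123u, &pa) == 0)
        printf("virtual 0x00403123 -> physical 0x%08X\n", (unsigned)pa);  /* 0x0002A123 */
    return 0;
}
```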

Page Tables (3)

Typical page table entry
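
As a rough illustration of the fields named in the figure (present/absent, protection, modified, referenced, caching disabled, page frame number), here is a hypothetical 32-bit layout; the widths and ordering are assumed, and real hardware formats differ.

```c
#include <stdint.h>
#include <stdio.h>

/* A hypothetical 32-bit layout of a page table entry; treat the widths and
 * order as illustrative, not as any particular CPU's format. */
typedef struct {
    uint32_t frame      : 20;  /* page frame number */
    uint32_t present    : 1;   /* present/absent bit */
    uint32_t protection : 3;   /* e.g. read / write / execute permissions */
    uint32_t modified   : 1;   /* "dirty" bit: page written since it was loaded */
    uint32_t referenced : 1;   /* set on any access; used by replacement algorithms */
    uint32_t no_cache   : 1;   /* caching disabled, e.g. for memory-mapped I/O */
    uint32_t unused     : 5;
} pte_t;

int main(void)
{
    printf("entry size: %zu bytes\n", sizeof(pte_t));   /* 4 bytes with this packing */
    return 0;
}
```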

Inverted Page Tables

Comparison of a traditional page table with an inverted page table
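
A sketch of the idea behind the comparison: the inverted table has one entry per physical frame rather than one per virtual page, so translation must search for the (process, virtual page) pair. Real systems hash that pair; the linear scan below is only to keep the idea visible, and all names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Inverted page table sketch: one entry per physical page frame, so the
 * table's size tracks physical memory, not the virtual address space. */
#define NUM_FRAMES 1024u

typedef struct {
    int      used;
    int      pid;      /* which process owns this frame */
    uint32_t vpage;    /* which of its virtual pages lives in this frame */
} ipte_t;

static ipte_t frames[NUM_FRAMES];

/* Return the frame holding (pid, vpage), or -1 if the page is not in memory. */
static int ipt_lookup(int pid, uint32_t vpage)
{
    for (unsigned f = 0; f < NUM_FRAMES; f++)
        if (frames[f].used && frames[f].pid == pid && frames[f].vpage == vpage)
            return (int)f;
    return -1;          /* page fault: the page must be fetched from disk */
}

int main(void)
{
    frames[7] = (ipte_t){ .used = 1, .pid = 42, .vpage = 0x1A };
    printf("(pid 42, vpage 0x1A) -> frame %d\n", ipt_lookup(42, 0x1A));
    return 0;
}
```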

Page Replacement Algorithms (1)
  • A page fault forces a choice
    • which page must be removed
    • to make room for the incoming page
  • A modified page must first be saved to disk (see the sketch below)
    • an unmodified page is just overwritten
  • Better not to choose an often-used page
    • it will probably need to be brought back in soon
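
A sketch of that choice in code, with hypothetical stand-ins for the disk I/O (`write_page_to_disk` and `read_page_from_disk` are stubs invented here): a dirty victim is written back before its frame is reused, a clean one is simply overwritten.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     present;
    bool     modified;   /* dirty bit */
    uint32_t frame;
} pte_t;

/* Hypothetical stand-ins for the disk I/O the OS would really perform. */
static void write_page_to_disk(uint32_t frame) { printf("write back frame %u\n", (unsigned)frame); }
static void read_page_from_disk(uint32_t frame) { printf("load needed page into frame %u\n", (unsigned)frame); }

/* A modified victim must be saved before its frame is reused;
 * an unmodified one can simply be overwritten. */
static void replace_page(pte_t *victim, pte_t *incoming)
{
    uint32_t frame = victim->frame;

    if (victim->modified)
        write_page_to_disk(frame);   /* dirty: save it first */
    victim->present = false;         /* drop the victim's mapping */

    read_page_from_disk(frame);      /* bring in the page that faulted */
    incoming->frame    = frame;
    incoming->present  = true;
    incoming->modified = false;
}

int main(void)
{
    pte_t victim   = { .present = true, .modified = true, .frame = 3 };
    pte_t incoming = { 0 };
    replace_page(&victim, &incoming);
    return 0;
}
```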
Page Replacement Algorithms (2)

  • Optimal page replacement
    • replace the page needed at the farthest point in the future
    • optimal but unrealizable
  • Not Recently Used (NRU)
  • FIFO
  • Second Chance
  • Clock (sketched below)
  • Least Recently Used (LRU)
  • Not Frequently Used (NFU)
  • Aging
  • Working Set
  • WSClock
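
As one concrete example from the list above, here is a sketch of the clock (second-chance) algorithm: the hand sweeps the frames, clearing referenced bits as it goes, and evicts the first frame whose bit was already clear. The data structures are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 8

typedef struct {
    uint32_t vpage;       /* virtual page currently in this frame */
    bool     referenced;  /* R bit, set by the hardware on each access */
} frame_t;

static frame_t frames[NUM_FRAMES];
static int hand = 0;      /* the clock hand */

/* Clock / second chance: sweep the frames, clearing R bits as we go, and
 * evict the first frame whose R bit was already clear. */
static int clock_evict(void)
{
    for (;;) {
        if (!frames[hand].referenced) {
            int victim = hand;
            hand = (hand + 1) % NUM_FRAMES;
            return victim;                    /* old and not recently referenced */
        }
        frames[hand].referenced = false;      /* give it a second chance */
        hand = (hand + 1) % NUM_FRAMES;
    }
}

int main(void)
{
    for (int i = 0; i < NUM_FRAMES; i++)
        frames[i] = (frame_t){ .vpage = (uint32_t)i, .referenced = (i % 2 == 0) };
    printf("evict frame %d\n", clock_evict());    /* frame 1: first with R == 0 */
    return 0;
}
```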
Design Issues for Paging Systems: Local versus Global Allocation Policies
  • Original configuration
  • Global page replacement
  • Local page replacement
Cleaning Policy (Garbage Collection)
  • Need for a background process, paging daemon
    • periodically inspects state of memory
  • When too few page frames are free
    • selects pages to evict using a replacement algorithm
Load Control
  • Despite good designs, system may still thrash when
    • some processes need more memory
    • but no processes need less
  • Solution: reduce the number of processes competing for memory
    • swap one or more to disk, divide up pages they held
    • reconsider degree of multiprogramming
Page Size

Small page size

  • Advantages
    • less internal fragmentation
    • better fit for various data structures, code sections
  • Disadvantages
    • a program needs more pages, hence a larger page table (the trade-off is quantified in the sketch below)
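
The trade-off can be made quantitative with the classic overhead estimate from Tanenbaum's text (not shown on the slide): page-table space s*e/p plus internal fragmentation p/2, minimized at p = sqrt(2*s*e). The process size and entry size below are assumed example values.

```c
#include <math.h>
#include <stdio.h>

/* Classic overhead estimate for average process size s, page size p, and
 * e bytes per page table entry: s*e/p (page table) + p/2 (internal
 * fragmentation in the last page).  s and e are assumed, for illustration. */
int main(void)
{
    const double s = 1 << 20;   /* assume a 1 MB average process size */
    const double e = 8;         /* assume 8 bytes per page table entry */

    for (int shift = 9; shift <= 16; shift++) {     /* 512 B .. 64 KB pages */
        double p = (double)(1 << shift);
        printf("page size %6.0f B -> overhead %8.0f B\n", p, s * e / p + p / 2.0);
    }
    printf("optimum near %.0f B\n", sqrt(2.0 * s * e));   /* about 4 KB here */
    return 0;
}
```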
Separate Instruction and Data Spaces
  • One address space
  • Separate I and D spaces
Shared Pages

Two processes sharing the same program share its page table

References
  • Chapters 8 and 9: Operating System Concepts, Silberschatz, Galvin, and Gagne
  • Chapter 4: Modern Operating Systems, Andrew S. Tanenbaum
  • X86 architecture
    • http://en.wikipedia.org/wiki/X86
  • Memory segment
    • http://en.wikipedia.org/wiki/Memory_segment
  • Memory model
    • http://en.wikipedia.org/wiki/Memory_model
  • IA-32 Intel Architecture Software Developer’s Manual, Volume 1: Basic Architecture
    • http://www.intel.com/design/pentium4/manuals/index_new.htm