
CMPT 300 Operating System I

Chapter 4: Memory Management


Why Memory Management?

  • Why money management?

    • Not enough money; the same goes for memory

  • Parkinson's law: programs expand to fill the memory available to hold them

    • "640KB of memory is enough for everyone" – Bill Gates

  • Programmers' ideal: an infinitely large, infinitely fast, nonvolatile memory

  • Reality: a memory hierarchy

[Figure: the memory hierarchy, fastest/smallest to slowest/largest: registers, cache, main memory, magnetic disk, magnetic tape]


What Is Memory Management?

  • Memory manager: the part of the OS that manages the memory hierarchy

    • Keeps track of which parts of memory are in use and which are not

    • Allocates/de-allocates memory to processes

    • Manages swapping between main memory and disk

  • Basic memory management: every program is loaded into main memory and runs there as a whole

  • Swapping & paging: move processes back and forth between main memory and disk



Outline

  • Basic memory management

  • Swapping

  • Virtual memory

  • Page replacement algorithms

  • Modeling page replacement algorithms

  • Design issues for paging systems

  • Implementation issues

  • Segmentation


Monoprogramming

  • One program at a time

    • The program shares memory with the OS

    • The OS loads the program from disk into memory

  • Three variations

[Figure: three ways to organize memory for the OS and one user program; addresses run from 0 to 0xFFF…]


Multiprogramming With Fixed Partitions

  • Advantages of multiprogramming?

  • Scenario: multiple programs at a time

    • Problem: how to allocate memory?

  • Divide memory into n partitions; each partition holds at most one program (process)

    • Equal partitions vs. unequal partitions

    • Each partition has its own job queue

    • Partitioning can be done manually when the system starts up

  • When a job arrives, put it into the input queue for the smallest partition large enough to hold it

    • Any space in a partition not used by its job is lost (internal fragmentation)


Example: Multiprogramming With Fixed Partitions

[Figure: memory divided into fixed partitions with boundaries at 0, 100K, 200K, 400K, 700K, and 800K; multiple input queues, one per partition; jobs A and B shown]


Single Input Queue

  • Disadvantage of multiple input queues: small jobs may wait in their queue even while a partition with larger memory sits empty

  • Solution: a single input queue shared by all partitions

[Figure: the same fixed partitions (0 to 800K) served by a single input queue; jobs A and B shown, with leftover space of 10K and 250K in their partitions]


How to Pick Jobs?

  • Pick the first job in the queue that fits an empty partition

    • Fast, but may waste a large partition on a small job

  • Pick the largest job that fits an empty partition

    • Memory efficient

    • But the smallest jobs are often interactive ones that need the best service, and they would wait longest

  • Policies for efficiency and fairness

    • Keep at least one small partition around

    • A job may not be skipped more than k times


A Naïve Model for Multiprogramming

  • Goal: determine how many processes to keep in main memory to keep the CPU busy

    • Multiprogramming improves CPU utilization

  • If, on average, a process computes only 20% of the time it sits in memory, then 5 processes can keep the CPU busy all the time

  • This assumes the processes never all wait for I/O at the same time

    • Too optimistic!


A Probabilistic Model

  • A process spends a fraction p of its time waiting for I/O to complete (0 < p < 1)

  • With n processes in memory at once, the probability that all n processes are waiting for I/O is p^n

  • CPU utilization = 1 – p^n

  • This assumes the processes are independent of each other

    • Not true in reality: a process may have to wait for another process to give up the CPU

    • A more accurate analysis would use queueing theory


CPU Utilization

  • CPU utilization = 1 – p^n

[Figure: CPU utilization as a function of the number of processes n, for several values of p]
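To make the formula concrete, here is a minimal C sketch (not from the slides) that tabulates 1 – p^n for a few I/O-wait fractions:

```c
#include <math.h>
#include <stdio.h>

/* CPU utilization under the probabilistic model: 1 - p^n, where p is
 * the fraction of time a process spends waiting for I/O and n is the
 * number of processes in memory. */
static double cpu_utilization(double p, int n) {
    return 1.0 - pow(p, n);
}

int main(void) {
    const double p[] = {0.2, 0.5, 0.8};   /* 20%, 50%, 80% I/O wait */
    printf("%3s%10s%10s%10s\n", "n", "p=0.2", "p=0.5", "p=0.8");
    for (int n = 1; n <= 10; n++) {
        printf("%3d", n);
        for (int i = 0; i < 3; i++)
            printf("%9.1f%%", 100.0 * cpu_utilization(p[i], n));
        printf("\n");
    }
    return 0;
}
```

With p = 0.8, even ten processes leave the CPU idle roughly 11% of the time, which is why heavily I/O-bound workloads reward a high degree of multiprogramming.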


Memory Management for Multiprogramming

  • Relocation

    • When a program is compiled, it assumes its starting address is 0 (logical addresses)

    • When it is loaded into memory, it may start at any address (physical addresses)

    • How do we map logical addresses to physical addresses?

  • Protection

    • A program's memory accesses must be confined to its own area


Relocation & Protection

  • Logical addresses are used for programming

    • E.g., call a procedure at logical address 100

  • Physical addresses are used in execution

    • If the procedure's partition starts at physical address 100K (partition 1), the procedure is actually at 100K + 100

  • Relocation problem: translating between logical and physical addresses

  • Protection: a malicious program could jump into space belonging to other users

    • E.g., by generating an instruction on the fly that reads or writes any word in memory


Relocation/Protection Using Registers

  • Base register: holds the start address of the partition

    • The content of the base register is added to every memory address generated

    • Base register = 100K: CALL 100 → CALL 100K + 100

  • Limit register: holds the length of the partition

    • Every address is checked against the limit register

  • Disadvantage: an addition and a comparison on every memory reference
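The scheme is easy to picture in code. Below is a minimal sketch, assuming one base/limit pair per process (the struct and names are illustrative, not from the slides):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical MMU state: one base and one limit register. */
typedef struct {
    uint32_t base;   /* physical start address of the partition */
    uint32_t limit;  /* length of the partition */
} mmu_regs;

/* Every reference pays one comparison (protection) and one
 * addition (relocation), the disadvantage noted above. */
static bool translate(const mmu_regs *r, uint32_t logical, uint32_t *physical) {
    if (logical >= r->limit)
        return false;                 /* protection fault */
    *physical = r->base + logical;    /* relocation */
    return true;
}

int main(void) {
    mmu_regs r = { .base = 100 * 1024, .limit = 100 * 1024 };
    uint32_t pa;
    if (translate(&r, 100, &pa))      /* CALL 100 in the program ... */
        printf("logical 100 -> physical %u\n", (unsigned)pa);  /* 100K + 100 */
    return 0;
}
```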



Outline

  • Basic memory management

  • Swapping

  • Virtual memory

  • Page replacement algorithms

  • Modeling page replacement algorithms

  • Design issues for paging systems

  • Implementation issues

  • Segmentation


In Time-sharing/Interactive Systems…

  • There is not enough main memory to hold all currently active processes

    • Intuition: excess processes must be kept on disk and brought in to run dynamically

  • Swapping: bring each process into memory in its entirety

    • Assumption: each process fits in main memory but cannot finish in a single run

  • Virtual memory: allow programs to run even when they are only partially in main memory

    • No assumption about program size


Swapping

[Figure: memory allocation changes over time as processes (e.g., A, B) are swapped in and out, leaving holes of unused memory]


Swapping vs. Fixed Partitions

  • In swapping, the number, location, and size of partitions vary dynamically

    • Flexible; improves memory utilization

    • Complicates allocating, de-allocating, and keeping track of memory

  • Memory compaction: combine the "holes" in memory into one big hole

    • Makes allocation more efficient

    • Requires a lot of CPU time

    • Rarely used in real systems


Enlarge Memory for a Process

  • Fixed-size processes: easy

  • Growing processes

    • Expand into the adjacent hole, if there is one

    • Otherwise, wait, or swap some processes out to create a large enough hole

    • If the swap area on disk is full, wait or be killed

  • Trick: allocate extra space whenever a process is swapped in or moved


Handling Growing Processes

[Figure: (a) allocating space for a single growing data segment; (b) allocating space for a growing data segment and a growing stack segment]



Memory Management With Bitmaps

  • Two ways to keep track of memory usage

    • Bitmaps and free lists

  • Bitmaps

    • Memory is divided into allocation units

  • One bit per unit: 0 = free, 1 = occupied


Size of Allocation Units

  • 4 bytes/unit → 1 bit in the map per 32 bits of memory → the bitmap takes 1/33 of memory

  • Trade-off between allocation unit size and memory utilization

    • Smaller allocation unit → larger bitmap

    • Larger allocation unit → smaller bitmap

    • On average, half of the last unit of each process is wasted

  • To bring a k-unit process into memory

    • We need to find a hole of k units

    • That means searching the entire map for k consecutive 0 bits, which is slow (see the sketch below)
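A minimal sketch of that search, assuming one bit per allocation unit packed LSB-first into bytes (names and layout are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a bitmap of `nbits` allocation units for k consecutive free
 * (0) bits. Returns the index of the first unit of the run, or -1
 * if no such hole exists. The linear scan over the whole map is
 * exactly why bitmap allocation is considered slow. */
static long find_free_run(const uint8_t *map, size_t nbits, size_t k) {
    size_t run = 0;
    for (size_t i = 0; i < nbits; i++) {
        int used = (map[i / 8] >> (i % 8)) & 1;
        run = used ? 0 : run + 1;        /* extend or reset the run of 0s */
        if (run == k)
            return (long)(i - k + 1);    /* start of the k-unit hole */
    }
    return -1;
}
```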


Memory Management With Linked Lists

  • Two types of entries: hole (H) / process (P)

  • Each entry records the segment type, its start address, and its length

    • E.g., a process entry that starts at address 20 with length 6

  • The list is kept sorted by address

Updating Linked Lists

  • Combine neighboring holes when possible

    • Not necessary with a bitmap

[Figure: the list before and after process X terminates; adjacent holes are merged into one]


Allocate Memory for New Processes

  • First fit: find the first hole big enough (see the sketch below)

    • Break the hole into two pieces: a P entry plus a smaller H

  • Next fit: start each search from where the last one left off

    • Empirical evidence: slightly worse performance than first fit

  • Best fit: take the smallest hole that is adequate

    • Slower

    • Tends to generate tiny, useless holes

  • Worst fit: always take the largest hole
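Here is a minimal first-fit sketch over the sorted segment list described above (the struct and names are illustrative, not the course's code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* One entry in the memory list: a process segment or a hole,
 * kept sorted by start address. */
typedef struct seg {
    bool is_hole;
    size_t start, len;
    struct seg *next;
} seg;

/* First fit: take the first hole of at least `need` units, carve
 * the allocation off its front, and leave the remainder (if any)
 * as a smaller hole. Returns the start address, or (size_t)-1. */
static size_t first_fit(seg **headp, size_t need) {
    for (seg **pp = headp; *pp != NULL; pp = &(*pp)->next) {
        seg *s = *pp;
        if (!s->is_hole || s->len < need)
            continue;
        if (s->len == need) {        /* exact fit: hole becomes a P entry */
            s->is_hole = false;
            return s->start;
        }
        seg *p = malloc(sizeof *p);  /* split: new P entry + smaller hole */
        if (p == NULL)
            return (size_t)-1;
        *p = (seg){ .is_hole = false, .start = s->start,
                    .len = need, .next = s };
        s->start += need;            /* shrink the hole behind it */
        s->len   -= need;
        *pp = p;                     /* link the new entry into the list */
        return p->start;
    }
    return (size_t)-1;               /* no hole big enough */
}
```

Next fit differs only in remembering the cursor between calls; best fit scans the whole list while keeping the smallest adequate hole.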


Using Distinct Lists

  • Keep distinct lists for processes and holes

    • The hole list can be sorted on size

      • Best fit becomes faster

    • Problem: how do we free a process?

      • Merging holes becomes very costly

  • Quick fit: group holes by size

    • Different lists for different sizes

    • E.g., list 1 for 4KB holes, list 2 for 8KB holes

      • What about a 5KB hole?

    • Speeds up the searching

    • Merging holes is still costly



Outline

  • Basic memory management

  • Swapping

  • Virtual memory

  • Page replacement algorithms

  • Modeling page replacement algorithms

  • Design issues for paging systems

  • Implementation issues

  • Segmentation


Why Virtual Memory?

  • If the program is too big to fit in memory…

    • Split the program into pieces: overlays

    • Swap overlays in and out

    • Problem: the programmer does the work of splitting the program into pieces

  • Virtual memory: the OS takes care of everything

    • The program can be larger than the available physical memory

    • Keep the parts currently in use in memory

    • Put the other parts on disk


Virtual and Physical Addresses

[Figure: inside the CPU package, the MMU sits between the CPU and the bus, translating addresses on their way to memory and the disk controller]

  • Virtual addresses (VA) are used/generated by programs

    • Each process has its own virtual address space

    • E.g., MOV REG,1000 ; 1000 is a VA

  • Physical addresses (PA) are used in actual execution

  • The MMU maps VAs to PAs


Paging

  • The virtual address space is divided into pages

    • Memory is allocated in units of a page

  • Physical memory is divided into page frames

    • Pages and page frames are always the same size, usually from 512B to 64KB

  • #pages > #page frames

    • On a 32-bit PC, the VA space can be as large as 4GB while physical memory is under 1GB

  • In hardware, a present/absent bit keeps track of which pages are physically present in memory

  • Page fault: an unmapped page is requested

    • The OS picks a little-used page frame and writes its content back to the disk

    • It then fetches the wanted page into the page frame just freed


Paging: An Example

[Figure: virtual pages on the left mapped to page frames on the right; 4KB pages]

  • Page 0 covers VA 0–4095 and maps to page frame 2 (PA 8192–12287), so VA 0 → PA 8192

  • VA 8192 lies in page 2, which maps to page frame 6, so PA = 6 × 4096 = 24576

  • VA 8199 is page 2, offset 7 → page frame 6, offset 7 → PA = 24576 + 7 = 24583

  • VA 32789 lies in page 8, which is unmapped → page fault


The Magic in MMU

[Figure: the internal operation of the MMU, splitting a virtual address into a page number and an offset and mapping it through the page table]

Page Table

  • Maps virtual pages onto page frames

    • The VA is split into a page number and an offset

    • Each page number has one entry in the page table

  • The page table can be extremely large

    • 32-bit virtual addresses with 4KB pages → 1M pages; what about 64-bit VAs?

    • And each process needs its own page table
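A minimal sketch of the lookup, assuming 4KB pages and a single-level table (the `pte` struct is illustrative):

```c
#include <stdint.h>

#define OFFSET_BITS 12                    /* 4KB pages: 2^12 bytes */
#define PAGE_SIZE   (1u << OFFSET_BITS)
#define NUM_PAGES   (1u << 20)            /* 2^32 / 2^12 = 1M entries */

/* Toy page table entry: frame number plus a present bit. */
typedef struct {
    uint32_t frame;
    int present;
} pte;

/* One entry per virtual page: ~8MB here, which is itself the
 * "page tables can be extremely large" problem noted above. */
static pte page_table[NUM_PAGES];

/* Split the VA, index the table, and glue the frame number back
 * onto the unchanged offset. Returns -1 on a page fault. */
static int64_t va_to_pa(uint32_t va) {
    uint32_t page   = va >> OFFSET_BITS;
    uint32_t offset = va & (PAGE_SIZE - 1);
    if (!page_table[page].present)
        return -1;                        /* page fault */
    return ((int64_t)page_table[page].frame << OFFSET_BITS) | offset;
}
```

With page_table[2] = {6, 1}, va_to_pa(8199) yields 24583, matching the example above.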


Typical Page Table Entry

  • Entry size: usually 32 bits

  • Page frame number: the goal of the mapping

  • Present/absent bit: is the page in memory?

  • Protection: what kinds of access are permitted

  • Modified ("dirty") bit: has the page been written? If so, it must be written back to disk before eviction

  • Referenced bit: has the page been referenced?

  • Caching disabled: do not cache this page (important for pages that map device registers)

[Figure: entry layout with the caching disabled, referenced, modified, protection, and present/absent bits alongside the page frame number]
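One plausible way to pack those fields into a 32-bit entry (field widths here are illustrative, not from any real MMU):

```c
#include <stdint.h>

/* Illustrative 32-bit page table entry layout. */
typedef struct {
    uint32_t frame      : 20;  /* page frame number */
    uint32_t present    : 1;   /* page in memory? */
    uint32_t protection : 3;   /* e.g., read/write/execute */
    uint32_t modified   : 1;   /* dirty: write back before eviction */
    uint32_t referenced : 1;   /* set on each access; used by NRU */
    uint32_t nocache    : 1;   /* caching disabled (device pages) */
    uint32_t unused     : 5;
} pte32;
```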


Fast Mapping

  • The virtual-to-physical mapping must be fast

    • An instruction may make several page table references

    • Keeping the entire page table in main memory alone is unacceptably slow

    • We have to look for hardware solutions


Two Simple Designs for Page Table

  • Use fast hardware registers for the page table

    • A single physical page table in the MMU: an array of fast registers, one entry per virtual page

    • Requires no memory references during mapping

    • But the registers must be reloaded at every process switch

    • Expensive if the page table is large

      • Hardware cost, plus context-switching overhead

  • Put the whole table in main memory

    • Only one register is needed, pointing to the start of the table

    • Fast switching

    • But several memory references per instruction

  • A pure memory solution is slow and a pure register solution is expensive, so…


Translation Lookaside Buffers (TLBs)

  • Observation: most programs make a large number of references to a small number of pages

    • Put the heavily used fraction of the mappings in fast hardware

    • The TLB, also called associative memory

[Figure: a virtual address is first checked against the TLB; if found, the physical address comes straight from the TLB, otherwise the page table is consulted]
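A toy software model of the lookup (a real TLB compares all entries in parallel in hardware; the sequential loop is just a stand-in):

```c
#include <stdint.h>

#define TLB_ENTRIES 64

/* A small, fully associative cache of page -> frame translations. */
typedef struct {
    uint32_t page;
    uint32_t frame;
    int valid;
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];

/* Return the frame for `page` on a TLB hit, or -1 on a miss, in
 * which case the MMU falls back to the page table and typically
 * installs the translation it finds into the TLB. */
static int64_t tlb_lookup(uint32_t page) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;
    return -1;
}
```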



Outline

  • Basic memory management

  • Swapping

  • Virtual memory

  • Page replacement algorithms

  • Modeling page replacement algorithms

  • Design issues for paging systems

  • Implementation issues

  • Segmentation


Page Replacement

  • When a page fault occurs and all page frames are full

    • Choose one page to remove; if it has been modified (a "dirty" page), its disk copy must be updated first

    • Better to choose an unmodified page (no write-back needed)

    • Better to choose a rarely used page

  • Many similar problems arise elsewhere in computer systems

    • Block replacement in memory caches

    • Cached page replacement in web servers

  • Revisit: the page table entry


Typical Page Table Entry

  • Same entry layout as shown earlier: page frame number, present/absent, protection, modified (dirty), referenced, and caching disabled bits

  • The referenced and modified bits are what the replacement algorithms below rely on


Optimal Algorithm

  • Label each page in main memory with the number of instructions that will be executed before that page is next referenced

    • E.g., a page labeled "1" will be referenced by the very next instruction

  • Remove the page with the highest label

    • This puts off page faults as long as possible

  • Unrealizable!

    • Why? Like SJF process scheduling and the Banker's Algorithm for deadlock avoidance, it requires knowledge of the future

    • Still useful as a benchmark for other algorithms


Remove Not Recently Used Pages

  • R and M bits are initially 0

    • R is set when a page is referenced

    • M is set when a page is modified

    • Done by hardware

  • The OS clears the R bits periodically (in software)

  • Four classes of pages when a page fault occurs

    • Class 0 (R=0, M=0): not referenced, not modified

    • Class 1 (R=0, M=1): not referenced, modified (happens when a clock interrupt clears R after the page was written)

    • Class 2 (R=1, M=0): referenced, not modified

    • Class 3 (R=1, M=1): referenced, modified

  • NRU removes a page at random from the lowest-numbered nonempty class, as sketched below
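A minimal sketch of the victim-selection step, assuming the R and M bits are already maintained as described (structure and names are illustrative):

```c
#include <stdlib.h>

#define NPAGES 256

/* Per-page status bits kept by hardware. */
typedef struct {
    int in_memory;
    int referenced;   /* R: set on access, cleared periodically by the OS */
    int modified;     /* M: set on write */
} page_info;

static page_info pages[NPAGES];

/* NRU: pick a random resident page from the lowest-numbered
 * nonempty class, where class = 2*R + M. Returns -1 if no page
 * is resident. */
static int nru_victim(void) {
    int best = -1, best_class = 4, count = 0;
    for (int i = 0; i < NPAGES; i++) {
        if (!pages[i].in_memory)
            continue;
        int cls = 2 * pages[i].referenced + pages[i].modified;
        if (cls < best_class) {
            best_class = cls;   /* found a lower class: restart the draw */
            best = i;
            count = 1;
        } else if (cls == best_class && rand() % ++count == 0) {
            best = i;           /* reservoir sampling: uniform over class */
        }
    }
    return best;
}
```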

