

  1. CHAPTER 9 - MEMORY MANAGEMENT CGS 3763 - Operating System Concepts UCF, Spring 2004

  2. OVERVIEW (Part 1) • Address Binding • Logical vs. Physical Address Space • Early Techniques • Dynamic Loading • Overlays • Swapping • Contiguous Allocation Methods • Partitions • Fragmentation • Dynamic Linking

  3. OVERVIEW (Part 2) • Paging • Page Tables • Frame Tables • Segmentation • Segment Tables • Segmentation with Paging

  4. BACKGROUND • Program must be brought into memory and turned into a process for it to be executed. • New or Job Queue contains collection of jobs on disk that are waiting to be brought into memory for execution. • Single Program Systems • User memory consists of a single partition • Task was to make the most use of that limited space • Multiprogramming Systems • Must keep multiple programs in memory simultaneously • Task is to manage & share this limited space among multiple programs

  5. ADDRESS BINDING • In order to load a program, instructions and associated data must be mapped or “bound” to specific locations in memory • Address binding can happen at three different stages. • Compile time: • If memory location known a priori, absolute code can be generated; must recompile code if starting location changes. • Load time: • Must generate relocatable code if memory location is not known at compile time. • Execution or Run time: • Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Special hardware (e.g., MMUs) required for address mapping.
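
To make the binding times concrete, here is a minimal sketch in C (not part of the original slides; the load address and relocation offsets are invented for illustration). It mimics load-time binding: the program is compiled as if it started at address 0, and the loader patches every address recorded in a relocation table with the actual load address when the image is brought into memory. Compile-time binding would instead hard-code the final addresses into the image, and run-time binding would defer the addition to the MMU (see slide 28).

    #include <stdio.h>

    #define NUM_RELOCS 3

    int main(void) {
        /* Illustrative values only: the image was compiled as if loaded at address 0. */
        unsigned long load_base = 0x40000;                      /* chosen by the loader   */
        unsigned long relocs[NUM_RELOCS] = {0x10, 0x24, 0x3C};  /* spots needing patching */

        for (int i = 0; i < NUM_RELOCS; i++) {
            unsigned long bound = load_base + relocs[i];        /* bind at load time */
            printf("relocatable offset 0x%lX -> bound address 0x%lX\n", relocs[i], bound);
        }
        return 0;
    }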

  6. ADDRESS BINDING TIMES • User programs go through several steps before being executed.

  7. SINGLE PROGRAM, COMPILE TIME BINDING

  8. SINGLE PROGRAM, LOAD TIME BINDING

  9. CONTIGUOUS ALLOCATION, SINGLE PARTITION SYSTEMS • Simplest form of memory management to implement • Characteristic of early computer systems • Contiguous: All portions of memory associated with a process are kept together - not broken into parts • Single Partition: User area consists of one large address space, all of which is allocated to a single user process • Hardware support required: • Fence Register, Mode Bit • Assuming fixed OS size, can bind at compile time • Assuming variable OS size, must bind at load time

  10. PROBLEMS WITH SINGLE PARTITIONS • Lots of wasted space (varies by program size) • Generally low memory utilization • Poor CPU & I/O Utilization (not multiprogramming) • Program size limited to size of memory • Can use overlays and dynamic loading to obtain better performance (squeeze larger program into smaller space) • Dynamic linking possible but not really applicable • Not generally used with multiprogramming • If multiprogramming, must swap programs in and out of memory • High overhead if overlays or swapping used.

  11. DYNAMIC LOADING • Program divided into subroutines or procedures • Subroutine not loaded until it is called • Better memory-space utilization; unused routines are never loaded. • Useful when large amounts of code are needed to handle infrequently occurring cases. • Example: payroll routines that vary with the number of days in the month
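
On POSIX systems this kind of on-demand loading can be sketched with dlopen/dlsym. The library name libpayroll.so and the symbol leap_year_pay below are hypothetical placeholders, not from the slides; the point is only that the routine is not brought into memory until the rare case actually occurs.

    #include <stdio.h>
    #include <dlfcn.h>      /* link with -ldl on most systems */

    int main(void) {
        /* The routine is loaded only when this infrequent case is reached. */
        void *lib = dlopen("./libpayroll.so", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "load failed: %s\n", dlerror());
            return 1;
        }
        void (*leap_year_pay)(void) = (void (*)(void))dlsym(lib, "leap_year_pay");
        if (leap_year_pay)
            leap_year_pay();            /* call the just-loaded routine */
        dlclose(lib);
        return 0;
    }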

  12. OVERLAYS • Keep in memory only those instructions and data that are needed at any given time. • Needed when process is larger than amount of memory allocated to it. • Examples: 2-Pass Assembler, Multi-Pass Compiler

  13. SWAPPING • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. • Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images. • Major part of swap time is transfer time; • total transfer time is directly proportional to the amount of memory swapped. • Modified versions of swapping are found on many systems
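
As a rough back-of-the-envelope sketch of the transfer-time claim (the numbers below are assumptions, not from the slides): swapping a 100 MB image over a 50 MB/s backing store costs about 2 seconds each way, so a swap out followed by a swap in costs roughly 4 seconds of pure transfer time, before any latency.

    #include <stdio.h>

    int main(void) {
        double image_mb  = 100.0;  /* assumed size of the memory image to swap          */
        double disk_mb_s = 50.0;   /* assumed sustained transfer rate of backing store  */

        double one_way_s = image_mb / disk_mb_s;   /* swap out (or swap in) alone */
        printf("one-way transfer: %.1f s, swap out + swap in: %.1f s\n",
               one_way_s, 2.0 * one_way_s);
        return 0;
    }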

  14. SWAPPING CONTINUED

  15. CONTIGUOUS ALLOCATION,MULTIPLE PARTITION SYSTEMS • Supports true multiprogramming • More than one program resident in memory at same time • Each program / process assigned a partition in memory • Partitions may be of different sizes • Sizes may be fixed at system start up or vary over time • Additional hardware support required • Base and Limit Registers • Partition registers (fixed partitions) • OS must keep track of each partition in some type of table: • e.g., Partition #, location, size, status • Difficult to bind at compile time - don’t always know which partition will be assigned

  16. MULTIPLE FIXED PARTITIONS, COMPILE TIME BINDING

  17. MULTIPLE FIXED PARTITIONS,LOAD TIME BINDING

  18. DYNAMIC LINKING • With static linking, all called routines are linked into a single large program before loading. • With dynamic linking, linking is postponed until execution time. • Small piece of code, the stub, used to locate the appropriate memory-resident library routine. • Stub replaces itself with the address of the routine, and executes the routine. • Operating system support needed to check whether the routine is already in the process's memory address space. • Particularly useful for libraries
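
A minimal sketch of the stub idea, assuming the routine is reached through a function pointer (a real dynamic linker patches a jump table, but the mechanism is the same): the first call lands in the stub, which locates the memory-resident routine, replaces itself with that address, and then runs the routine; later calls go straight to it.

    #include <stdio.h>

    static void library_routine(void) { puts("running the shared routine"); }

    static void stub(void);                  /* forward declaration                 */
    static void (*lib_call)(void) = stub;    /* every call starts at the stub       */

    static void stub(void) {
        puts("stub: locating routine and patching the call slot");
        lib_call = library_routine;          /* stub replaces itself with the address */
        lib_call();                          /* ...and executes the routine           */
    }

    int main(void) {
        lib_call();   /* first call: goes through the stub            */
        lib_call();   /* later calls: bound directly to the routine   */
        return 0;
    }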

  19. FIXED PARTITION ISSUES • Can use overlays or dynamic loading within a single partition to improve performance • Can use dynamic linking to share libraries or common routines among resident processes • Swapping possible but not required for multiprogramming. May use for controlling process mix. • May establish separate job queues for each partition based on memory requirements • To bind at compile time, must know the partition to be assigned beforehand • Size of OS can vary by changing size/location of lowest partition

  20. FIXED PARTITIONS (cont.) • Advantages: • Allows multiprogramming • Compared with single partition systems - • Less wasted space • Better memory utilization • Better processor and I/O utilization • Context Switching faster than Swapping • Disadvantages: • Internal Fragmentation (unused memory inside partitions) • Number of concurrent processes limited by number of partitions • Program limited to size of largest partition

  21. MULTIPLE DYNAMIC PARTITIONS • Multiple-partition allocation • Hole – block of available memory; holes of various sizes are scattered throughout memory. • When a process arrives, it is allocated memory from a hole large enough to accommodate it. • Operating system maintains information about: • allocated partitions • free partitions (holes) • [Figure: successive memory snapshots showing processes 5, 8, and 2 resident, process 8 terminating, and processes 9 and 10 later allocated into the freed holes]

  22. MULTIPLE DYNAMIC PARTITIONS,LOAD TIME BINDING

  23. DYNAMIC PARTITION ISSUES • Memory allocated dynamically with each job getting a partition of size >= its program length. • Can’t bind at compile time. Partition locations always changing over time. Only load or run time binding. • Can’t bind at load time if you want to compact. • Can use overlays or dynamic loading within a single partition to improve performance • Can use dynamic linking to share libraries or common routines among resident processes • Swapping used primarily as tool for compaction (roll-in, roll-out). May also be used for controlling process mix. • Size of OS can vary by relocating lowest partition higher in memory • How to select hole/partition for next job?

  24. DYNAMIC STORAGE ALLOCATION PROBLEM • How to satisfy a request of size n from a list of free holes. • First-fit: Allocate the first hole that is big enough. • Best-fit: Allocate the smallest hole that is big enough • Must search entire list/table, unless ordered by size • Produces the smallest leftover hole. • Worst-fit: Allocate the largest hole • Must also search entire list/table. • Produces the largest leftover hole. • First-fit and best-fit better than worst-fit in terms of speed and storage utilization.
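
A small sketch of the three placement policies over a free-hole list (hole sizes and the request are illustrative). Note that best-fit and worst-fit must scan the whole list unless it is kept ordered by size, exactly as the slide states.

    #include <stdio.h>

    #define NHOLES 5

    int first_fit(const int holes[], int n, int request) {
        for (int i = 0; i < n; i++)
            if (holes[i] >= request) return i;      /* first hole big enough */
        return -1;
    }

    int best_fit(const int holes[], int n, int request) {
        int best = -1;
        for (int i = 0; i < n; i++)                 /* must scan the whole list */
            if (holes[i] >= request &&
                (best == -1 || holes[i] < holes[best]))
                best = i;                           /* smallest adequate hole */
        return best;
    }

    int worst_fit(const int holes[], int n, int request) {
        int worst = -1;
        for (int i = 0; i < n; i++)                 /* must scan the whole list */
            if (holes[i] >= request &&
                (worst == -1 || holes[i] > holes[worst]))
                worst = i;                          /* largest adequate hole */
        return worst;
    }

    int main(void) {
        int holes[NHOLES] = {100, 500, 200, 300, 600};   /* free hole sizes (KB) */
        int request = 212;
        printf("first-fit: hole %d\n", first_fit(holes, NHOLES, request));  /* 1: 500 KB */
        printf("best-fit:  hole %d\n", best_fit(holes, NHOLES, request));   /* 3: 300 KB */
        printf("worst-fit: hole %d\n", worst_fit(holes, NHOLES, request));  /* 4: 600 KB */
        return 0;
    }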

  25. DYNAMIC PARTITIONS (cont.) • Advantages: • Allows multiprogramming • Compared with multiple fixed partition systems - • Less wasted space assuming compaction • Better memory utilization • Higher degree of multiprogramming possible (more partitions) • Better processor and I/O utilization • Larger programs possible (can be as large as user area) • Disadvantages: • External Fragmentation can occur • Number of concurrent processes is • unpredictable and • limited at any given time by the number of partitions, which itself can vary • More overhead than fixed partitions, more complicated

  26. FRAGMENTATION REVIEW • Internal fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used. • Reduce internal fragmentation by • using dynamic partitions or • assigning smallest useable fixed partition to a job • External Fragmentation – enough memory space exists to satisfy a job request, but memory space is not contiguous - unused space between partitions • Reduce external fragmentation by compaction • Shuffle memory contents to place all free memory together in one large block. • Compaction is possible only if relocation is dynamic, and is done at execution time - uses run time binding
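
A toy sketch of compaction under the stated assumption of run-time binding (the partition layout is invented): each allocated partition is slid toward low memory and its base/relocation register is updated, leaving all free space as one large hole.

    #include <stdio.h>

    struct partition { int base; int size; int allocated; };

    int main(void) {
        /* Assumed 1000 KB user area with alternating allocated and free partitions. */
        struct partition mem[4] = {
            {   0, 100, 1 },   /* process A */
            { 100, 300, 0 },   /* hole      */
            { 400, 200, 1 },   /* process B */
            { 600, 400, 0 },   /* hole      */
        };
        int next_base = 0;
        for (int i = 0; i < 4; i++) {
            if (mem[i].allocated) {
                mem[i].base = next_base;   /* move the partition; update its base register */
                next_base  += mem[i].size;
            }
        }
        printf("after compaction: one free hole of %d KB starting at %d KB\n",
               1000 - next_base, next_base);
        return 0;
    }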

  27. LOGICAL VS. PHYSICAL MEMORY • Distinction necessary with run-time or execution-time binding • A logical address space is bound to a separate physical address space • Logical address – generated by the CPU from instructions; also referred to as a virtual address. • Physical address – the address seen by the memory unit and loaded into the memory-address register • Logical and physical addresses are the same in compile-time and load-time address-binding schemes • Logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

  28. MEMORY MANAGEMENT UNIT • A hardware device called a memory management unit (MMU) is required to map virtual to physical addresses on the fly. • Base register replaced by relocation register • In the MMU scheme, the value in the relocation register is added to every logical address generated by a user process before it is sent to the memory-address register • The user program deals with logical addresses; it never sees the real physical addresses.
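
A minimal sketch of what the MMU does on every reference under this scheme (register values are illustrative): the logical address is first checked against the limit of the partition, then the relocation register is added before the address reaches memory.

    #include <stdio.h>

    int main(void) {
        unsigned int relocation = 14000;  /* where the partition starts in physical memory */
        unsigned int limit      = 3000;   /* size of the partition                          */
        unsigned int logical    = 346;    /* address generated by the CPU                   */

        if (logical >= limit) {
            puts("trap: addressing error (beyond partition limit)");
            return 1;
        }
        unsigned int physical = relocation + logical;   /* 14000 + 346 = 14346 */
        printf("logical %u -> physical %u\n", logical, physical);
        return 0;
    }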

  29. MULTIPLE FIXED PARTITIONS,RUN TIME BINDING

  30. MULTIPLE DYNAMIC PARTITIONS,RUN TIME BINDING

  31. MEMORY MANAGEMENT METHODS

  32. SEGMENTATION • Memory-management method that supports user view of memory. • A program is a collection of segments. • A segment is a logical unit in that program such as a: • main program, • sub-procedure or sub-routine, • function, • local variables space, • global variables space, • common block, • stack, symbol table, arrays, etc.

  33. LOGICAL VIEW OF SEGMENTATION • [Figure: segments 1, 2, 3, and 4 in the user/programmer space mapped to separate, non-contiguous regions of physical memory space]

  34. SEGMENTATION ADDRESS SPACES • Logical address space for a program is a collection of variable-length segments. • Each segment has a unique name or number • Logical address consists of a two-tuple: <segment-number, offset> • A segment table maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has: • base: contains the starting physical address where a segment resides in physical memory. • limit: specifies the logical length of the segment.
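
A hedged sketch of the translation this implies (segment-table contents are illustrative): the segment number selects a table entry, the offset is checked against that entry's limit, and the physical address is base plus offset.

    #include <stdio.h>

    struct seg_entry { unsigned int base; unsigned int limit; };

    int main(void) {
        /* Illustrative segment table: {starting physical address, segment length}. */
        struct seg_entry seg_table[3] = {
            { 1400, 1000 },   /* segment 0 */
            { 6300,  400 },   /* segment 1 */
            { 4300, 1100 },   /* segment 2 */
        };
        unsigned int s = 2, d = 53;                  /* logical address <2, 53> */

        if (d >= seg_table[s].limit) {               /* offset checked against limit */
            puts("trap: offset beyond segment limit");
            return 1;
        }
        unsigned int physical = seg_table[s].base + d;   /* 4300 + 53 = 4353 */
        printf("<%u, %u> -> physical address %u\n", s, d, physical);
        return 0;
    }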

  35. SEGMENTATION ADDRESSING

  36. OS SUPPORT FOR SEGMENTATION • A process’ PCB contains a pointer to the segment table. • During a context switch this pointer is loaded into the STBR. • Segment-table base register (STBR) points to the location of a process’s segment table in memory. • A process’ PCB contains a value which indicates the length of the segment table. • During a context switch this value is loaded into the STLR. • Segment-table length register (STLR) indicates the number of segments used by a program • Depending on hardware support in the CPU • Can load a small # of segment table entries into special registers • Otherwise, each data request requires two memory accesses: 1 for the segment table entry, 1 for the data

  37. SEGMENTATION EXAMPLE

  38. ANOTHER EXAMPLE

  39. SEGMENTATION ISSUES • Cannot use compile or load time binding. Program code must be relocatable: • Run time binding - MMU calculates the physical address from the logical address two-tuple <segment #, offset> • Requires a special compiler to • divide the program into logical segments • create relocatable code • In addition, the OS must support segmentation • Long term scheduler allocates memory for all segments, usually on a first-fit or best-fit basis: • Segments vary in length, so memory allocation is an example of the dynamic storage-allocation problem • Can result in external fragmentation • Use compaction to combine fragments into larger, more usable space.

  40. MEMORY PROTECTION WITH SEGMENTATION • A segment number is legal if # < STLR. • Segment bases and limits (in the segment table) used to ensure memory accesses stay within a segment • Each entry in a segment table can include a validation bit and privilege information: • validation bit = 1 ⇒ legal segment • validation bit = 0 ⇒ illegal segment • May also include read/write/execute privileges
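
A sketch of the protection checks this slide describes, with an assumed flag layout for the read/write/execute bits (not from the slides): the segment number is compared with the STLR, the validation bit and privileges are checked, and only then is the base + limit translation allowed.

    #include <stdio.h>

    #define PERM_R 4
    #define PERM_W 2
    #define PERM_X 1

    struct seg_entry { unsigned base, limit; int valid; int perms; };

    int check_access(const struct seg_entry *table, unsigned stlr,
                     unsigned s, unsigned d, int want) {
        if (s >= stlr)                       return -1;  /* segment number not legal       */
        if (!table[s].valid)                 return -2;  /* validation bit = 0: illegal    */
        if ((table[s].perms & want) != want) return -3;  /* e.g., write to a read-only seg */
        if (d >= table[s].limit)             return -4;  /* offset past end of segment     */
        return 0;                                        /* access permitted               */
    }

    int main(void) {
        struct seg_entry table[2] = {
            { 1400, 1000, 1, PERM_R | PERM_X },  /* code: read/execute only */
            { 4300,  400, 1, PERM_R | PERM_W },  /* data: read/write        */
        };
        printf("write to code segment:  %d\n", check_access(table, 2, 0, 10, PERM_W));
        printf("read from data segment: %d\n", check_access(table, 2, 1, 10, PERM_R));
        return 0;
    }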

  41. SHARING SEGMENTS

  42. SHARING SEGMENTS (cont.) • A segment is shared if it is pointed to by segment table entries of different processes • To share a segment: segment code can be self-referencing but not self-modifying (aka pure code, reentrant code) • For self-referencing segments: • processes may use different segment #s for a shared segment if references are indirect (e.g., PC-relative) • processes must use the same segment # for a shared segment if references are direct and incorporate the segment # itself.

  43. SEGMENTATION (cont.) • Advantages: • Programs no longer require contiguous memory • Easier to load larger programs by breaking into parts • No internal fragmentation • Easier for programmer to think of segments/objects than memory as linear array. • Enforced access to segments (e.g., read-only segments) • Further improvement in memory utilization possible with: • Segment Sharing • Dynamic Loading of Segments

  44. SEGMENTATION (cont.) • Disadvantages: • External Fragmentation • Requires additional hardware / software to implement • Segmentation-aware compilers • Segment Tables & Registers • MMU capable of mapping logical to physical addresses • Associative Memory for faster segment table lookup. • Can require two memory accesses if segment table is large • Segment table lookup and address calculation can slow down execution • Entire program must reside in memory • assuming no sharing or dynamic loading of segments • length of program <= size of memory

  45. PAGING • Memory-management method that views a program as a set of “pages”: blocks of fixed and equal size. • Unlike segmentation, no functional division of the program’s logical address space • Memory divided into “frames” with fixed size equal to that of pages. • Page size = Frame Size • OS maintains list of free frames • Page/Frame size is a power of 2, between 512 bytes and 16 megabytes • Most systems use page sizes between 4K and 8K bytes

  46. PAGING (cont.) • Program pages are loaded into free frames wherever they may exist in memory. • For basic paging, must have sufficient number of free frames to load entire program before execution can begin. • Set up a page table to help translate logical to physical addresses • page table relates logical page number to physical frame

  47. LOGICAL-TO-PHYSICAL PAGE MAPPING

  48. PAGING ADDRESS SPACES • Logical address consists of a two-tuple: <page-number, offset> • Address generated by CPU is divided into: • Page number (p) – used as an index into a page table which contains the base address of each page (frame) in physical memory. • Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit.
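
A minimal sketch of the split-and-lookup this describes, assuming 4 KB pages and an invented four-entry page table: because the page size is a power of two, the page number and offset fall out of the logical address with a shift and a mask.

    #include <stdio.h>

    #define PAGE_BITS   12                    /* 4 KB pages */
    #define PAGE_SIZE   (1u << PAGE_BITS)
    #define OFFSET_MASK (PAGE_SIZE - 1)

    int main(void) {
        unsigned int page_table[4] = {5, 6, 1, 2};   /* page number -> frame number */
        unsigned int logical = 0x2ABC;               /* page 2, offset 0xABC        */

        unsigned int p = logical >> PAGE_BITS;       /* page number                 */
        unsigned int d = logical & OFFSET_MASK;      /* page offset                 */
        unsigned int physical = (page_table[p] << PAGE_BITS) | d;   /* frame 1 -> 0x1ABC */

        printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
               logical, p, d, physical);
        return 0;
    }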

  49. PAGING ADDRESSING

  50. PAGING EXAMPLE • In this example, each page is 4 bytes. • To find the location of “n” in memory: • n is in page 3 at offset 1 • page 3 is loaded into frame 2 • Calculate the physical address: • frame * page size + offset • 2 * 4 + 1 • = 9 (physical location)
