
Memory Management


Presentation Transcript


  1. Memory Management

  2. Managing Memory … The Simplest Case One user program at a time: early PCs and mainframes, embedded systems. The logical address space is the same as the physical address space. (Memory map: the O/S at the top of memory, near 0xFFF…, with the user program below it, starting at address 0.)

  3. But in Modern Computer Systems Modern memory managers subdivide memory to accommodate multiple processes. Memory needs to be allocated efficiently to pack as many processes into memory as possible. When required, a process should be able to have exclusive use of a block of memory, or to permit sharing of the memory by multiple processes. Primary memory is abstracted so that a program perceives that the memory allocated to it is a large array of contiguously addressed bytes (but it usually isn’t).

  4. Four Major Concerns of a Memory Manager * Relocation * Protection * Sharing * Physical Organization

  5. Relocation • The programmer does not know where the program will be placed in memory when it is executed • While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated) • Memory references in the code must be translated to actual physical memory addresses

  6. (Diagram: a process’s logical address space being mapped onto the physical address space.)

  7. Protection With multiple processes in memory and running simultaneously, the system must protect one process from referencing memory locations belonging to another process. This is more of a hardware responsibility than an operating system responsibility.

  8. Linking and Loading a Program The compiler translates source programs into relocatable object modules, each with its own program segment, data segment, and stack segment. The linker combines these relocatable object modules with library modules into a load module, in which all references to data or functions are resolved.

  9. The loader takes the load module and creates the process image in main memory.

  10. Absolute Loader The load module is copied into the process image in main memory with no changes in addresses: the program and data segments land at exactly the addresses laid out in the load module, starting at 0.

  11. Static Binding All addresses in the load module’s code segment are relative to 0. Add the offset to every address as the code is loaded: with offset = 1000, the instruction “Jump 400” in the load module becomes “Jump 1400” in the process image in main memory.
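The relocation step above can be sketched in C. This is a minimal sketch assuming a hypothetical one-operand instruction format (the `ToyInstr` name and layout are our own); real load modules also carry relocation tables telling the loader which words actually hold addresses.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical load-module instruction: an opcode plus one address
 * operand. Every address in the code segment is relative to 0. */
typedef struct {
    int opcode;   /* e.g. a toy JUMP */
    int addr;     /* relative address within the module */
} ToyInstr;

/* Static binding: add the load offset to every address operand,
 * exactly once, as the code is loaded. */
void static_bind(ToyInstr *code, size_t n, int offset)
{
    for (size_t i = 0; i < n; i++)
        code[i].addr += offset;
}
```

After `static_bind(code, n, 1000)`, a “Jump 400” has become “Jump 1400”; the binding is permanent, which is why a statically bound process cannot simply be moved later.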

  12. Dynamic Run-time Binding Addresses are maintained in relative format: “Jump 400” in the load module remains “Jump 400” in the process image. All addresses in the code segment are relative to 0, and address translation takes place on the fly at run time (here with offset = 1000).

  13. Base Register The relative address in an instruction (e.g., 400 in “jump 400”) is fed to an adder along with the base register (e.g., 1000), producing the absolute address (1400). A comparator checks the address against the limit register; if it is out of range, an interrupt is raised (segment error!).
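The adder/comparator pair can be sketched as a small C function. The name `translate` and the error convention are our own, and we assume the limit register holds the segment length; a real MMU would raise an interrupt rather than return a value.

```c
#include <assert.h>

/* Address translation with a base and limit register.
 * Assumption: limit holds the segment length; a segment error
 * (normally an interrupt) is modeled here as returning -1. */
long translate(long base, long limit, long rel)
{
    if (rel < 0 || rel >= limit)
        return -1;          /* comparator fires: segment error! */
    return base + rel;      /* adder: base + relative = absolute */
}
```

Note that the comparison uses the relative address, so the same check works no matter where the process is relocated; only the base register changes.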

  14. A Code Example . . . static int gVar; . . . int function_a(int arg) { . . . gVar = 7; libFunction(gVar); . . . } Static variables are stored in the data segment. Generated code will be stored in the code segment. libFunction( ) is defined in an external module; at compile time we don’t know the address of its entry point.

  15. A Code Example The compiler turns the source ( . . . static int gVar; . . . int function_a(int arg) { . . . gVar = 7; libFunction(gVar); . . . } ) into a relocatable object module with relative addresses:
Code Segment:
0000 . . .
0008 entry function_a
. . .
0220 load r1, 7
0224 store r1, 0036
0228 push 0036
0232 call libFunction
. . .
0400 External Reference Table
0404 “libFunction” ????
. . .
0500 External Definition Table
0540 “function_a” 0008
. . .
0600 Symbol Table
0799 End of Code Segment
Data Segment:
0036 [space for gVar]
0049 End of Data Segment

  16. The linker combines the modules into an object file, still with relative addresses. libFunction contains an external definition table indicating its relative entry point:
Code Segment:
0000 (other modules)
. . .
1008 entry function_a
. . .
1220 load r1, 7
1224 store r1, 0136
1228 push 0136
1232 call 2334
. . .
1399 (end of function_a)
. . . (other modules)
2334 entry libFunction
. . .
2999 (end of code segment)
Data Segment:
0136 [space for gVar]
. . .
1000 (end of data segment)

  17. The loader performs a static bind with offset 4000, turning the load module’s relative addresses into real addresses (the data segment is placed after the 3000-unit code segment, so data addresses shift by 7000):
1008 entry function_a → 5008 entry function_a
1220 load r1, 7 → 5220 load r1, 7
1224 store r1, 0136 → 5224 store r1, 7136
1228 push 0136 → 5228 push 7136
1232 call 2334 → 5232 call 6334
2334 entry libFunction → 6334 entry libFunction
2999 (end of code segment) → 6999 (end of code segment)
0136 [space for gVar] → 7136 [space for gVar]
1000 (end of data segment) → 8000 (end of data segment)

  18. Sharing Memory • Multiple processes (e.g., created with fork) running the same executable • Explicitly shared memory regions

  19. Physical Organization The flow of information between the various “levels” of memory.

  20. Computer memory consists of a large array of words or bytes, each with its own address. Registers built into the CPU are typically accessible in one clock cycle. Most CPUs can decode an instruction and perform one or more simple register operations in one clock cycle. The same is not true of memory operations, which can take many clock cycles.

  21. The memory hierarchy: Registers → Cache → RAM → Disk → Optical, tape, etc. At the top, memory is fast but expensive (registers: one machine cycle); at the bottom, cheap but slow.

  22. Memory Allocation Before an address space can be bound to physical addresses, the memory manager must allocate the space in real memory to which the address space will be mapped. There are a number of schemes for doing memory allocation.

  23. Fixed Partitioning • Equal-size fixed partitions • Any process whose size is less than or equal to the partition size can be loaded into an available partition • If all partitions are full, the operating system can swap a process out of a partition • A program may not fit in a partition; the programmer must then design the program with overlays

  24. Fixed Partitioning Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This is called internal fragmentation. But . . . It’s easy to implement.

  25. Placement Algorithm with Fixed-Size Partitions • Equal-size partitions: because all partitions are of equal size, it does not matter which partition is used • Placement is trivial.

  26. Fixed Partitioning with Different Sizes An example is OS/360 MFT, where the operator fixed the partition sizes at system startup. Two options: • Separate input queues • A single input queue

  27. Multiple Input Queues Jobs are put into the queue for the smallest partition big enough to hold them. (Memory map: O/S from 0 to 100K, Partition 1 to 200K, Partition 2 to 400K, Partition 3 to 700K, Partition 4 to 800K.) Disadvantage? Memory can go unused, even though there are jobs waiting to run that would fit.

  28. Single Input Queue When a partition becomes free, pick the first job on the queue that fits. Disadvantage? Small jobs can be put into much larger partitions than they need, wasting memory space.

  29. Single Input Queue An alternative solution: scan the whole queue and find the job that best fits the free partition. Disadvantage? It discriminates against small jobs; starvation is possible.

  30. CPU Utilization From a probabilistic point of view, suppose that a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n processes are waiting for I/O (in which case the CPU is idle) is p^n. CPU utilization is therefore given by the formula CPU Utilization = 1 − p^n

  31. Consider the case where processes spend 80% of their time waiting for I/O (not unusual in an interactive end-user system where most time is spent waiting for keystrokes). Notice that roughly 10 processes must be in memory to approach 90% CPU utilization.

  32. Predicting Performance Suppose you have a computer with 32MB of memory, of which the operating system uses 16MB. If user programs average 4MB, we can then hold 4 jobs in memory at once. With an 80% average I/O wait, CPU utilization = 1 − 0.8^4 ≈ 60%. Adding 16MB of memory allows us to have 8 jobs in memory at once, so CPU utilization = 1 − 0.8^8 ≈ 83%. Adding a second 16MB would only increase CPU utilization to 1 − 0.8^12 ≈ 93%.

  33. Dynamic Partitioning Partitions are of variable length and number. A process is allocated exactly as much memory as it requires. Eventually you get holes in the memory. This is called external fragmentation. You must use compaction to shift processes so they are contiguous and all free memory is in one block.

  34. For Example … Total memory is 64M: the O/S occupies 8M, leaving 56M free.

  35. Process 1 is allocated exactly the 20M it requires, leaving 36M free.

  36. Process 2 (14M) and Process 3 (18M) are loaded next, each allocated exactly as much memory as it requires, leaving only a small block of free memory.

  37. Process 2 is swapped out and Process 4 (10M) is loaded into the space it occupied, leaving a 4M hole between Process 4 and Process 3 (18M).

  38. Process 1 is swapped out and Process 5 (16M) is loaded into the 20M block it occupied, leaving another 4M hole. Fragmentation! Free memory is now scattered in small holes between Process 5, Process 4, and Process 3.

  39. Periodically the O/S could do memory compaction – like disk compaction. Copy all of the blocks of code for loaded processes into contiguous memory locations, thus opening larger unused blocks of free memory. The problem is that this is expensive!

  40. A related question: How much memory do you allocate to a process when it is created or swapped in? In most modern computer languages data can be created dynamically.

  41. The Heap This may come as a surprise… Dynamic memory allocation with malloc, or new, does not really cause system memory to be dynamically allocated to the process. In most implementations, the linker anticipates the use of dynamic memory and reserves space to honor such requests. The linker reserves space for both the process’s run-time stack and its heap. Thus a malloc( ) call returns an address within the existing address space reserved for the process. Only when this space is used up does a system call to the kernel take place to get more memory. The address space may have to be rebound – a very expensive process.

  42. Managing Dynamically Allocated Memory When managing memory dynamically, the operating system must keep track of the free and used blocks of memory. Common methods used are bitmaps and linked lists.

  43. Linked List Allocation Memory is divided up into some number of fixed-size allocation units. Keep the list sorted by address. Each node records whether the block holds a Process (P) or is a Hole (H), the unit it starts at, and its length:
P 0 5 → H 5 3 → P 8 6 → P 14 4 → H 18 2 → P 20 6 → P 26 3 → H 29 3 → X

  44. Linked List Allocation When a process ends (say the one at P 20 6), just merge its node with the hole next to it (if one exists – here H 18 2). We want contiguous blocks!

  45. Linked List Allocation After the merge the list becomes:
P 0 5 → H 5 3 → P 8 6 → P 14 4 → H 18 8 → P 26 3 → H 29 3 → X
When blocks are managed this way, there are several algorithms that the O/S can use to find blocks for a new process, or for one being swapped in from disk.
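The merge step on slides 44 and 45 can be sketched in C over the same list; the struct and function names here are our own, and node storage is assumed to be managed by the caller.

```c
#include <assert.h>
#include <stddef.h>

/* One node per segment, kept sorted by starting address, as on the
 * slides: 'P' = process, 'H' = hole. */
typedef struct Seg {
    char kind;            /* 'P' or 'H' */
    int start, len;       /* in fixed-size allocation units */
    struct Seg *next;
} Seg;

/* When a process terminates, its node becomes a hole; a sweep then
 * merges every pair of adjacent holes so free memory stays in
 * contiguous blocks. */
void end_process(Seg *head, Seg *p)
{
    p->kind = 'H';
    for (Seg *cur = head; cur; cur = cur->next) {
        while (cur->kind == 'H' && cur->next && cur->next->kind == 'H' &&
               cur->start + cur->len == cur->next->start) {
            cur->len += cur->next->len;   /* absorb the right-hand hole */
            cur->next = cur->next->next;  /* unlink it (caller owns storage) */
        }
    }
}

/* Rebuild the slides' example list and terminate P 20 6; returns the
 * length of the hole starting at 18 (the slides show it growing to 8). */
int demo(void)
{
    Seg s7 = {'H', 29, 3, NULL}, s6 = {'P', 26, 3, &s7},
        s5 = {'P', 20, 6, &s6},  s4 = {'H', 18, 2, &s5},
        s3 = {'P', 14, 4, &s4},  s2 = {'P', 8, 6, &s3},
        s1 = {'H', 5, 3, &s2},   s0 = {'P', 0, 5, &s1};
    end_process(&s0, &s5);
    return s4.len;
}
```

Because the list is kept sorted by address, an adjacency check is just “does this hole end where the next one starts”, which is what makes the merge cheap.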

  46. Dynamic Partitioning Placement Algorithms Best-fit algorithm Search the entire list and choose the block that is the smallest that will hold the request. This algorithm is the worst performer overall. Since the smallest possible block is found for a process this algorithm tends to leave lots of tiny holes that are not useful.

  47. (Diagram: best fit places the process in the smallest block it will fit in, leaving only a tiny hole.)

  48. Dynamic Partitioning Placement Algorithms Worst-fit – a variation of best fit This scheme is like best fit, but when looking for a new block it picks the largest block of unallocated memory. The idea is that external fragmentation will result in bigger holes, so it is more likely that another block will fit.

  49. (Diagram: worst fit places the process in the largest block of unallocated memory, leaving a big hole.)

  50. Dynamic Partitioning Placement Algorithms First-fit algorithm Finds the first block in the list that will fit. It may end up with many processes loaded at the front end of memory, which must be searched over when trying to find a free block.
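The three placement algorithms from slides 46 to 50 can be sketched over a plain array of hole sizes (a real allocator would walk the linked list from slide 43); the function names are our own. Each returns the index of the chosen hole, or -1 if no hole is big enough.

```c
#include <assert.h>

/* First fit: take the first hole that is large enough. */
int first_fit(const int *holes, int n, int req)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= req) return i;
    return -1;
}

/* Best fit: take the smallest hole that still holds the request
 * (tends to leave lots of tiny, useless holes). */
int best_fit(const int *holes, int n, int req)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
            best = i;
    return best;
}

/* Worst fit: take the largest hole, so the leftover piece
 * stays big enough to be useful. */
int worst_fit(const int *holes, int n, int req)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
            worst = i;
    return worst;
}
```

Note that only first fit can stop early; best and worst fit must scan the entire list on every allocation, which is part of why first fit is often the pragmatic choice.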
