Memory Management


  1. Memory Management G. Anuradha

  2. Outline • Background • Logical versus Physical Address Space • Swapping • Contiguous Allocation • Paging • Segmentation • Segmentation with Paging

  3. Background • Program must be brought into memory and placed within a process for it to be executed. • Input Queue - collection of processes on the disk that are waiting to be brought into memory for execution. • User programs go through several steps before being executed.

  4. Virtualizing Resources • Physical Reality: Processes/Threads share the same hardware • Need to multiplex CPU (CPU Scheduling) • Need to multiplex use of Memory (Today) • Why worry about memory multiplexing? • The complete working state of a process and/or kernel is defined by its data in memory (and registers) • Consequently, cannot just let different processes use the same memory • Probably don’t want different processes to even have access to each other’s memory (protection)

  5. Important Aspects of Memory Multiplexing • Controlled overlap: • Processes should not collide in physical memory • Conversely, would like the ability to share memory when desired (for communication) • Protection: • Prevent access to private memory of other processes • Different pages of memory can be given special behavior (Read Only, Invisible to user programs, etc) • Kernel data protected from User programs • Translation: • Ability to translate accesses from one address space (virtual) to a different one (physical) • When translation exists, process uses virtual addresses, physical memory uses physical addresses

  6. Names and Binding • Symbolic names → Logical names → Physical names • Symbolic Names: known in a context or path • file names, program names, printer/device names, user names • Logical Names: used to label a specific entity • inodes, job number, major/minor device numbers, process id (pid), uid, gid.. • Physical Names: address of entity • inode address on disk or memory • entry point or variable address • PCB address

  7. Binding of instructions and data to memory • Address binding of instructions and data to memory addresses can happen at three different stages. • Compile time: • If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. • Load time: • Relocatable code must be generated if the memory location is not known at compile time. • Execution time: • Binding is delayed until runtime if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g. base and limit registers).
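
As a concrete illustration of execution-time binding, the following is a minimal C sketch (not from the slides; the base and limit values are assumed for illustration) of how relocation hardware might map a logical address onto a physical one:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical relocation hardware: every logical address issued by the
 * running process is checked against the limit register and then offset
 * by the base (relocation) register to form a physical address. */
typedef struct {
    uint32_t base;   /* start of the process's partition in physical memory */
    uint32_t limit;  /* size of the partition */
} mmu_t;

static bool translate(const mmu_t *mmu, uint32_t logical, uint32_t *physical) {
    if (logical >= mmu->limit)
        return false;                 /* addressing error: trap to the OS */
    *physical = mmu->base + logical;
    return true;
}

int main(void) {
    mmu_t mmu = { .base = 14000, .limit = 3000 };   /* assumed register values */
    uint32_t phys;
    if (translate(&mmu, 346, &phys))
        printf("logical 346 -> physical %u\n", (unsigned)phys);
    if (!translate(&mmu, 4000, &phys))
        printf("logical 4000 -> trap (beyond limit)\n");
    return 0;
}

Because the binding happens on every access, the OS is free to move the process and simply reload the base register, which is exactly what execution-time binding requires.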

  8. Binding time tradeoffs • Early binding • compiler - produces efficient code • allows checking to be done early • allows estimates of running time and space • Delayed binding • Linker, loader • produces efficient code, allows separate compilation • portability and sharing of object code • Late binding • VM, dynamic linking/loading, overlaying, interpreting • code less efficient, checks done at runtime • flexible, allows dynamic reconfiguration

  9. Multi-step Processing of a Program for Execution • Preparation of a program for execution involves components at: • Compile time (i.e., “gcc”) • Link/Load time (Unix “ld” does the linking) • Execution time (e.g. dynamic libs) • Addresses can be bound to final values anywhere in this path • Depends on hardware support • Also depends on operating system • Dynamic Libraries • Linking postponed until execution • Small piece of code, stub, used to locate appropriate memory-resident library routine • Stub replaces itself with the address of the routine, and executes routine

  10. Dynamic Loading • Routine is not loaded until it is called. • Better memory-space utilization; unused routine is never loaded. • Useful when large amounts of code are needed to handle infrequently occurring cases. • No special support from the operating system is required; implemented through program design.
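
On POSIX systems this kind of on-demand loading can be sketched with dlopen/dlsym; the library name libheavy.so and the symbol rare_case below are purely illustrative assumptions, not part of the slides:

#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic loading; link with -ldl on Linux */

int main(void) {
    /* Load the library only when the infrequent case actually occurs. */
    void *handle = dlopen("./libheavy.so", RTLD_LAZY);  /* hypothetical library */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name; rare_case is an assumed symbol. */
    int (*rare_case)(int) = (int (*)(int))dlsym(handle, "rare_case");
    if (!rare_case) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("rare_case(42) = %d\n", rare_case(42));
    dlclose(handle);
    return 0;
}

Until dlopen is called, none of the library's code or data occupies the process's memory, which is the space saving the slide describes.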

  11. Dynamic Linking • Linking postponed until execution time. • Small piece of code, stub, used to locate the appropriate memory-resident library routine. • Stub replaces itself with the address of the routine, and executes the routine. • Operating system support is needed to check whether the routine is already in the process’s memory address space.

  12. Overlays • Keep in memory only those instructions and data that are needed at any given time. • Needed when process is larger than amount of memory allocated to it. • Implemented by user, no special support from operating system; programming design of overlay structure is complex.

  13. Overlaying

  14. Memory Partitioning • An early method of managing memory • Pre-dates virtual memory and is not used much now • But it will clarify the later discussion of virtual memory if we look at partitioning first • Virtual memory has evolved from the partitioning methods

  15. Types of Partitioning • Fixed Partitioning • Dynamic Partitioning • Simple Paging • Simple Segmentation • Virtual Memory Paging • Virtual Memory Segmentation

  16. Fixed Partitioning - Equal Size Partitions • Any process whose size is less than or equal to the partition size can be loaded into an available partition • The operating system can swap a process out of a partition if none of the processes in memory are in a ready or running state

  17. Fixed Partitioning Problems • A program may not fit in a partition • The programmer must then design the program with overlays • Main memory use is inefficient • Any program, no matter how small, occupies an entire partition • This results in internal fragmentation. Fragmentation generally happens when memory blocks are allocated and freed at random, splitting the partitioned memory (on disk or in main memory) into smaller non-contiguous fragments.
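
A small C sketch of equal-size fixed partitioning, with assumed partition and process sizes, showing how internal fragmentation appears whenever a process does not fill its partition:

#include <stdio.h>

#define NUM_PARTITIONS 4
#define PARTITION_SIZE 8192   /* assumed: four equal 8K partitions */

/* occupied[i] holds the size of the process in partition i, or 0 if free. */
static unsigned occupied[NUM_PARTITIONS];

/* Place a process in any free partition and return the internal
 * fragmentation (unused space inside the partition), or -1 on failure. */
static long place(unsigned size) {
    if (size > PARTITION_SIZE)
        return -1;                          /* too large: would need overlays */
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (occupied[i] == 0) {
            occupied[i] = size;
            return (long)(PARTITION_SIZE - size);
        }
    }
    return -1;                              /* all partitions busy: swap or wait */
}

int main(void) {
    unsigned procs[] = { 3000, 8000, 500 }; /* assumed process sizes */
    for (int i = 0; i < 3; i++)
        printf("process of %u bytes -> internal fragmentation %ld bytes\n",
               procs[i], place(procs[i]));
    return 0;
}

Even the 500-byte process takes a whole 8K partition, wasting most of it, which is the inefficiency the slide points out.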

  18. Fixed Partitioning – Unequal Size Partitions • Lessens both problems, but does not solve them completely • In the figure • Programs up to 16M can be accommodated without overlays • Smaller programs can be placed in smaller partitions, reducing internal fragmentation

  19. Placement Algorithm • Equal-size • Placement is trivial (no options) • Unequal-size • Can assign each process to the smallest partition within which it will fit • Queue for each partition • Processes are assigned in such a way as to minimize wasted memory within a partition
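
A sketch of the "smallest partition that fits" rule for unequal fixed partitions, using assumed partition sizes; in a real system the returned index would identify the per-partition queue the process joins:

#include <stdio.h>

/* Assumed unequal partition sizes (in KB) for a fixed-partition system. */
static const unsigned partition_kb[] = { 2, 4, 6, 8, 12, 16 };
#define NUM_PARTITIONS (sizeof partition_kb / sizeof partition_kb[0])

/* Pick the smallest partition that can hold the process, i.e. the
 * partition whose dedicated queue the process would be placed on. */
static int choose_partition(unsigned process_kb) {
    int best = -1;
    for (unsigned i = 0; i < NUM_PARTITIONS; i++) {
        if (partition_kb[i] >= process_kb &&
            (best < 0 || partition_kb[i] < partition_kb[best]))
            best = (int)i;
    }
    return best;   /* -1: the process is too large for every partition */
}

int main(void) {
    unsigned sizes[] = { 3, 7, 16, 20 };    /* assumed process sizes in KB */
    for (int i = 0; i < 4; i++) {
        int p = choose_partition(sizes[i]);
        if (p >= 0)
            printf("%2uK process -> queue for the %uK partition\n",
                   sizes[i], partition_kb[p]);
        else
            printf("%2uK process -> too large, needs overlays\n", sizes[i]);
    }
    return 0;
}

Assigning each process to the tightest partition that holds it minimizes the wasted memory within that partition, as the slide states.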

  20. Fixed Partitioning

  21. Remaining Problems with Fixed Partitions • The number of active processes is limited by the system • i.e., limited by the pre-determined number of partitions • A large number of very small processes will not use the space efficiently, whether the partitions are of equal or unequal size • The technique is now obsolete; it was used by an early IBM mainframe OS, OS/MFT

  22. Dynamic Partitioning • Partitions are of variable length and number • Process is allocated exactly as much memory as required

  23. Dynamic Partitioning Example • External Fragmentation • Memory external to all processes is fragmented • Can be resolved using compaction • OS moves processes so that they are contiguous • Time consuming and wastes CPU time • Refer to Figure 7.4 [memory maps at successive stages: OS (8M); P1 (20M), P2 (14M), P3 (18M), P4 (8M); Empty regions of 56M, 6M and 4M]
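
A sketch of compaction, using process sizes assumed to match the example above: the OS slides the remaining processes toward low memory so the free space coalesces into a single large block:

#include <stdio.h>

#define MEM_MB 64
#define OS_MB   8

typedef struct { const char *name; unsigned base, size; } region_t;

/* Assumed layout (in MB) after some allocations and terminations. */
static region_t procs[] = {
    { "P2",  8, 14 },
    { "P4", 28,  8 },
    { "P3", 42, 18 },
};
#define NUM_PROCS (sizeof procs / sizeof procs[0])

/* Compaction: relocate each process so they sit contiguously just above
 * the OS area, leaving one large free block at the top of memory.
 * A real OS would also copy the memory contents and fix the base registers. */
static void compact(void) {
    unsigned next = OS_MB;
    for (unsigned i = 0; i < NUM_PROCS; i++) {
        procs[i].base = next;
        next += procs[i].size;
    }
    printf("single free block: %uM at address %uM\n", MEM_MB - next, next);
}

int main(void) {
    compact();
    for (unsigned i = 0; i < NUM_PROCS; i++)
        printf("%s now occupies %uM..%uM\n", procs[i].name,
               procs[i].base, procs[i].base + procs[i].size);
    return 0;
}

In this assumed layout the scattered 6M, 6M and 4M holes become one 16M block, but only at the cost of copying every resident process, which is why compaction is time consuming.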

  24. Dynamic Partitioning • Operating system must decide which free block to allocate to a process • Best-fit algorithm • Chooses the block that is closest in size to the request • Worst performer overall • Since the smallest adequate block is found for the process, only the smallest sliver of fragmentation is left • These slivers are too small to be useful, so memory compaction must be done more often

  25. Dynamic Partitioning • First-fit algorithm • Scans memory from the beginning and chooses the first available block that is large enough • Fastest • May leave many processes loaded at the front end of memory that must be searched over when trying to find a free block

  26. Dynamic Partitioning • Next-fit • Scans memory from the location of the last placement • More often allocates a block of memory at the end of memory where the largest block is found • The largest block of memory is broken up into smaller blocks • Compaction is required to obtain a large block at the end of memory

  27. Dynamic Partitioning • Worst Fit: • Allocates the largest hole • Produces the largest leftover hole, which may be more useful than the small leftover hole from a best-fit approach
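
The four placement policies above can be sketched as different ways of scanning the list of free holes. In the sketch below the hole sizes are assumptions, chosen to match the 100K/500K/200K/300K/600K example worked through on the next slides:

#include <stdio.h>

#define NUM_HOLES 5
/* Assumed free holes, in KB. */
static unsigned hole[NUM_HOLES] = { 100, 500, 200, 300, 600 };
static unsigned last = 0;              /* where next fit resumes its scan */

typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT, NEXT_FIT } policy_t;

/* Return the index of the chosen hole, or -1 if the request is rejected.
 * The chosen hole is shrunk by the request size; the leftover stays free. */
static int allocate(unsigned req, policy_t p) {
    int chosen = -1;
    for (int k = 0; k < NUM_HOLES; k++) {
        int i = (p == NEXT_FIT) ? (int)((last + k) % NUM_HOLES) : k;
        if (hole[i] < req) continue;
        if (p == FIRST_FIT || p == NEXT_FIT) { chosen = i; break; }
        if (p == BEST_FIT  && (chosen < 0 || hole[i] < hole[chosen])) chosen = i;
        if (p == WORST_FIT && (chosen < 0 || hole[i] > hole[chosen])) chosen = i;
    }
    if (chosen >= 0) {
        hole[chosen] -= req;
        if (p == NEXT_FIT) last = (unsigned)((chosen + 1) % NUM_HOLES);
    }
    return chosen;
}

int main(void) {
    unsigned reqs[] = { 212, 417, 112, 426 };   /* request sizes from the example */
    for (int i = 0; i < 4; i++) {
        int h = allocate(reqs[i], BEST_FIT);    /* change the policy to compare */
        if (h >= 0) printf("%uK -> hole %d (%uK left over)\n", reqs[i], h, hole[h]);
        else        printf("%uK -> rejected\n", reqs[i]);
    }
    return 0;
}

With BEST_FIT all four requests are satisfied; rerunning the same sequence with WORST_FIT or NEXT_FIT leaves no hole large enough for the 426K request, which is the outcome described on the following slides.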

  28. Memory Allocation Policies Example: Parking Space Management • A scooter, a car and a truck are looking for parking space, arriving in that order. Parking spaces of each size are available. A truck space can accommodate a car and a scooter, or even two scooters; similarly, two scooters can be parked in a car space.

  29. Memory Allocation Policies • Alongside is shown the partitioning of the parking area for truck, car and scooter. When a scooter, a car and a truck arrive in that order, parking space is allocated according to each algorithm's policy.

  30. Worst Fit Next Fit Best Fit

  31. Memory Allocation Policies Now take another theoretical example. • Given partitions of 100K, 500K, 200K, 300K and 600K as shown, the different algorithms place the processes 212K, 417K, 112K and 426K in that order. • The request for 426K will be rejected by the next-fit and worst-fit algorithms because no remaining free partition is large enough to hold 426K.

  32. Next Fit (212K, 417K, 112K placed) • The request for 426K will be rejected • The other two policies can be worked through in the same way

  33. Colour key for the figures: 212K = green, 417K = blue, 112K = pink, 426K = yellow, external fragmentation = gray, unused partitions = white. Panels: Next Fit, Best Fit, Worst Fit

  34. [Figure: resulting memory maps for Worst Fit, Best Fit and Next Fit; partition sizes of 100K, 200K, 300K, 400K, 500K and 600K shown before and after placement] • The user selects the algorithm in the order worst fit, best fit and next fit • User input: a 300K process

  35. Dynamic Memory Partitioning with User Interactivity • The user entered 5 processes, which were allotted in memory • [Figure: empty memory alongside memory holding the allotted processes of 300K, 400K, 500K and 800K]

  36. User Selected Best Fit • New process given by the user: 450K • External fragmentation results • [Figure: the user's processes (300K, 400K, 500K, 800K) in memory before and after the 450K process is placed by best fit]

  37. User Selected Worst Fit • New process given by the user: 450K • External fragmentation results (a 350K remainder is visible in the figure) • [Figure: the user's processes (300K, 400K, 500K, 800K) in memory before and after the 450K process is placed by worst fit]

  38. User Selected Next Fit • New process given by the user: 450K • External fragmentation results • [Figure: the user's processes (300K, 400K, 500K, 800K) in memory before and after the 450K process is placed by next fit]

  39. Memory Allocation Policies • Which is the best placement algorithm with respect to fragmentation? Worst-fit is the best placement algorithm with respect to fragmentation because it results in the least unusable fragmentation, leaving larger leftover holes. • Which is the worst placement algorithm with respect to time complexity? Best-fit is the worst placement algorithm with respect to time complexity because it scans the entire memory space for every request, which takes more time.

  40. Best-fit, first-fit, and worst-fit memory allocation methods for fixed partitioning

  41. Memory block sizes: Block 1 = 50K, Block 2 = 200K, Block 3 = 70K, Block 4 = 115K, Block 5 = 15K • Best-fit memory allocation makes the best use of memory space but is slower in making allocations • Jobs 1 to 5 are submitted and processed first • After the first cycle, job 2 and job 4, located in block 5 and block 3 respectively and each having completed one turnaround, are replaced by job 6 and job 7, while job 1, job 3 and job 5 remain in their designated blocks • In the third cycle, job 1 remains in block 4, while job 8 and job 9 replace job 7 and job 5 respectively

  42. First-fit memory allocation is faster in making allocations but leads to memory waste • Scans memory from the beginning and chooses the first available block that is large enough

  43. Worst-fit memory allocation is the opposite of best-fit: it allocates the largest available free block to the new job • It is not the best choice for an actual system
