
Ch. 4 Memory Management


Presentation Transcript


  1. Ch. 4 Memory Management Parkinson’s law: “Programs expand to fill the memory available to hold them.”

  2. Memory hierarchy • Registers • Cache • RAM • How can the OS help processes share RAM? • Hard disk • CD, DVD • Tape

  3. Basic memory management • Two types: • Those that move processes back and forth between main memory and disk (swapping and paging). • Those that do not (keep every process in main memory).

  4. Basic memory management • Monoprogramming w/out swapping or paging • only 1 program runs at a time • OS is also in memory • load → execute to completion → load next • Examples: embedded systems, palmtop computers, early mainframes, MS-DOS

  5. Basic memory management • Multiprogramming w/ fixed partitions • Divide memory into n fixed size partitions • All the same size or some larger, some smaller? • Single job queue or multiple job queues?

  6. Basic memory management • Multiprogramming w/ fixed partitions issues: • Unused partition space is wasted. • Multiple queues • some partitions (often the larger ones) may sit unused while other queues are full • Single queue • Large partitions wasted on small jobs • Or, if we favor larger jobs, smaller (often interactive) jobs may be starved.

  7. Modeling multiprogramming Let p be the probability that a process waits on I/O. Given n such processes, what is the probability that all n processes are waiting on I/O at the same time? p · p · … · p (n factors) = p^n Therefore, for a given p and n, the CPU utilization = 1 - p^n (Assumes that all n processes are independent, but that is not really the case with 1 CPU or if a process needs exclusive I/O! But we’ll employ the “ostrich algorithm” and live with it!)
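A minimal Python sketch of the utilization model on this slide (the function name and the sample values are illustrative, not from the slides): with n independent processes that each wait on I/O with probability p, all n wait at once with probability p^n, so utilization is 1 - p^n.

def cpu_utilization(p: float, n: int) -> float:
    # Probability that at least one of the n processes is runnable.
    return 1.0 - p ** n

# 80% I/O wait: utilization climbs quickly as processes are added.
for n in (1, 2, 4, 8, 10):
    print(f"n={n:2d}  utilization={cpu_utilization(0.80, n):.2f}")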

  8. CPU utilization • Given that we have enough memory to support 10 processes and each process spends 80% of its time doing I/O, what’s CPU utilization?

  9. CPU utilization • Given that we have enough memory to support 10 processes and each process spends 80% of its time doing I/O, what’s CPU utilization? • Given n=10, p=0.80 • So CPU utilization = 1 - 0.80^10 • (about 0.89, or roughly 90%)

  10. CPU utilization

  11. CPU utilization • Suppose we have 32MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • Given the above n, what is our CPU utilization?

  12. CPU utilization • Suppose we have 32MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • n = (32-16)/4 = 4 • Given the above n, what is our CPU utilization? • 1 - 0.80^4 ≈ 0.59, or about 60% • What is our CPU utilization if we add 16MB?

  13. CPU utilization • Now we have 48MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • n = (48-16)/4 = 8 • Given the above n, what is our CPU utilization? • 1 - 0.80^8 ≈ 0.83, or 83% • So we went from about 60% to 83% with 16MB more. • What is our CPU utilization if we add another 16MB?

  14. CPU utilization • Now we have 64MB. The OS uses 16MB. Each user program uses 4MB and has an 80% I/O wait. • How many users, n, can we support in memory at once? • n = (64-16)/4 = 12 • Given the above n, what is our CPU utilization? • 1 - 0.80^12 ≈ 0.93, or 93% • So we went from 83% to 93% with 16MB more.
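A small Python sketch (the helper names are made up for illustration) reproducing the walkthrough on slides 11-14: the degree of multiprogramming is n = (total RAM - OS RAM) / per-process RAM, and utilization is 1 - p^n. It prints roughly 59%, 83%, and 93% (the slides round the first figure to 60%).

def degree_of_multiprogramming(total_mb: int, os_mb: int, proc_mb: int) -> int:
    # How many user programs fit alongside the OS.
    return (total_mb - os_mb) // proc_mb

def cpu_utilization(p: float, n: int) -> float:
    return 1.0 - p ** n

p = 0.80                    # each process waits on I/O 80% of the time
for total in (32, 48, 64):  # adding 16MB of RAM at each step
    n = degree_of_multiprogramming(total, os_mb=16, proc_mb=4)
    print(f"{total}MB RAM -> n={n}, utilization={cpu_utilization(p, n):.0%}")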

  15. Relocation and protection • Relocation – a program should be able to execute in any partition of memory (starting at any physical address) • Protection – a process should have read/write access to data memory, read access to its own code memory, read access to some parts of the OS, and no access to other parts • Base/limit registers = early method
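A toy Python sketch of base/limit relocation and protection (the class and the sample values are illustrative, not real MMU code): every address the program issues is checked against the limit register and then offset by the base register, so a process can be loaded into any partition and cannot touch memory outside it.

class BaseLimitMMU:
    def __init__(self, base: int, limit: int):
        self.base = base      # physical start address of the partition
        self.limit = limit    # size of the partition in bytes

    def translate(self, virtual_addr: int) -> int:
        # Protection check first, then relocation by adding the base.
        if not 0 <= virtual_addr < self.limit:
            raise MemoryError(f"protection fault at {virtual_addr:#x}")
        return self.base + virtual_addr

mmu = BaseLimitMMU(base=0x40000, limit=0x10000)  # 64KB partition at 256KB
print(hex(mmu.translate(0x1234)))                # ok: 0x41234
try:
    mmu.translate(0x20000)                       # outside the partition
except MemoryError as err:
    print(err)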

  16. Swapping • We want to run more processes than can fit in memory at once! 2 solutions: • Swapping • bring in each process in its entirety • run it for a while • put it back on disk • Virtual memory (paging)

  17. Swapping • Memory compaction (like disk defragmentation) • memory may become fragmented into little pieces, so we may have to move all processes down into the lowest memory addresses.

  18. Swapping

  19. Swapping • What if the memory needs of a process change over time?

  20. Swapping • Memory management – How do we keep track of what memory is being used and what memory is available? • Bitmaps • Linked lists

  21. Swapping: memory management w/ bitmaps • Divide memory into equal-size allocation units (e.g., 1KB “chunks”). • Bit = 0 means the chunk is free; bit = 1 means the chunk is in use. • Small chunks -> large bitmap • Large chunks -> small bitmap • Large chunks -> wasted space in the last chunk of each allocation
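A minimal Python sketch of bitmap allocation with 1KB allocation units (the class and method names are invented for illustration): bit 0 marks a free chunk, bit 1 a used chunk, and allocating k units means scanning for a run of k zero bits.

CHUNK = 1024  # 1KB allocation unit, as on the slide

class BitmapAllocator:
    def __init__(self, memory_bytes: int):
        self.bits = [0] * (memory_bytes // CHUNK)

    def alloc(self, size_bytes: int) -> int:
        # Return the first chunk index of a free run, or -1 if nothing fits.
        need = -(-size_bytes // CHUNK)      # ceiling division
        run = 0
        for i, bit in enumerate(self.bits):
            run = run + 1 if bit == 0 else 0
            if run == need:
                start = i - need + 1
                self.bits[start:i + 1] = [1] * need
                return start
        return -1

    def free(self, start: int, size_bytes: int) -> None:
        need = -(-size_bytes // CHUNK)
        self.bits[start:start + need] = [0] * need

mem = BitmapAllocator(16 * 1024)   # 16KB of managed memory -> 16 bits
a = mem.alloc(3000)                # needs 3 chunks -> starts at chunk 0
b = mem.alloc(5000)                # needs 5 chunks -> starts at chunk 3
mem.free(a, 3000)
print(mem.bits)                    # first 3 bits are 0 again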

  22. Swapping: memory management w/ linked lists • Linked list of allocated and free memory segments. • Segment = memory used by a process, or a hole (free memory) between processes • Usually sorted by address • May be implemented as one list (of both used and free segments) or as two separate lists

  23. Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • Worst fit • Quick fit

  24. Swapping: memory management w/ linked lists • Allocation methods: • First fit • start at beginning and use the first one that fits. • simple • fast • leaves large holes • Next fit • Best fit • Worst fit • Quick fit

  25. Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • continue searching from where FF last left off • slightly worse than FF • Best fit • Worst fit • Quick fit

  26. Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • find the closest match (leave the smallest hole) • slower than FF • wastes memory = leaves many small, useless holes • Worst fit • Quick fit

  27. Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • Worst fit • always allocate from the largest hole, so the leftover piece stays large • not very good in practice • Quick fit

  28. Swapping: memory management w/ linked lists • Allocation methods: • First fit • Next fit • Best fit • Worst fit • Quick fit • keeps separate lists of common hole sizes • search is extremely fast • But when a process terminates (or is swapped out), merging neighboring holes is expensive. • If merging is not done, then fragmentation occurs.
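The search strategies above are easy to sketch over a sorted list of free holes. A simplified Python illustration (the Hole record and the example hole sizes are made up; merging neighbouring holes on free, next fit, and quick fit are omitted):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Hole:
    start: int   # address where the free segment begins
    size: int    # length of the free segment

def first_fit(holes: list[Hole], size: int) -> Optional[Hole]:
    # Take the first hole that is big enough: simple and fast.
    return next((h for h in holes if h.size >= size), None)

def best_fit(holes: list[Hole], size: int) -> Optional[Hole]:
    # Take the smallest hole that fits: slower, leaves many tiny holes.
    fits = [h for h in holes if h.size >= size]
    return min(fits, key=lambda h: h.size, default=None)

def worst_fit(holes: list[Hole], size: int) -> Optional[Hole]:
    # Take the largest hole, so the leftover piece stays usable.
    biggest = max(holes, key=lambda h: h.size, default=None)
    return biggest if biggest and biggest.size >= size else None

holes = [Hole(0, 5), Hole(14, 4), Hole(26, 3), Hole(32, 20)]
print(first_fit(holes, 4))   # Hole(start=0, size=5)
print(best_fit(holes, 4))    # Hole(start=14, size=4)
print(worst_fit(holes, 4))   # Hole(start=32, size=20)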

  29. Next: virtual memory
