
Topic 8: Memory Management


Presentation Transcript


  1. Topic 8: Memory Management L & E: Pages 51-75 Tanenbaum: Pages 309-341

  2. Introduction • If you have only one program and it is smaller than memory then there are no problems - use static addressing. • But ... • If we want multiple programs we need to be able to relocate programs and their data - and this is tricky. • The dispatcher we built in lecture 2 used base and limit registers to enable code to be relocated.
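A minimal C sketch of the base-and-limit idea on slide 2 - every logical address is checked against the limit register and then offset by the base register. The struct, names and values are illustrative assumptions, not the lecture's hardware:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative model only: the register pair loaded for each process.
       Names and sizes are assumptions, not the lecture's MMU.              */
    typedef struct {
        uint32_t base;   /* physical address where the process starts       */
        uint32_t limit;  /* size of the process's address space in bytes    */
    } relocation_regs;

    /* Translate a logical address; -1 stands in for a protection trap.     */
    int64_t translate(relocation_regs r, uint32_t logical)
    {
        if (logical >= r.limit)
            return -1;                        /* outside the process - trap */
        return (int64_t)r.base + logical;     /* relocate                   */
    }

    int main(void)
    {
        relocation_regs p1 = { 0x4000, 0x1000 };  /* hypothetical process 1 */
        printf("logical 0x10   -> physical 0x%llx\n",
               (unsigned long long)translate(p1, 0x10));
        printf("logical 0x2000 -> %lld (trap)\n",
               (long long)translate(p1, 0x2000));
        return 0;
    }

Because only the base register changes when a process is moved, the same code runs unchanged wherever it is loaded - which is what makes relocation (and later swapping) workable.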

  3. Our Dispatcher
         DISPATCH:  STI
                    LDA      BASE_TABLE+
                    JNZ      RESTORE_IT
                    JUMPSUB  LOADIT
                    LDLIMIT  LIMITTABLE+
                    JUMP     STARTIT

  4.-13. The Dispatcher in Operation [animation frames: memory diagrams with the O/S at the low end and processes 1, 2 and 3 loaded in the space above it; as processes start and finish the free space becomes fragmented, until process 3 can no longer fit]

  14. Some Terms • Process 3 couldn’t fit because the memory had become fragmented. • The solution is to compact the processes currently running, so the free space forms one contiguous block. • But ... compaction is very slow (lots of copy instructions).
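A rough sketch of why compaction costs so much: every live process image is copied down towards the O/S and its base register value updated. The proc struct, array and sizes below are made up for illustration:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct { uint32_t base, size; } proc;  /* hypothetical descriptor */

    /* Slide 14's point in code: one big copy per process, so the cost is
       proportional to the total number of bytes in use.                      */
    static void compact(uint8_t *memory, proc *procs, int n, uint32_t os_top)
    {
        uint32_t next_free = os_top;             /* first byte above the O/S  */
        for (int i = 0; i < n; i++) {            /* assume sorted by base     */
            memmove(memory + next_free,          /* the expensive copy        */
                    memory + procs[i].base, procs[i].size);
            procs[i].base = next_free;           /* new base-register value   */
            next_free += procs[i].size;
        }
    }

    int main(void)
    {
        static uint8_t memory[1024];
        proc procs[] = { { 100, 50 }, { 400, 200 } };  /* two scattered images */
        compact(memory, procs, 2, 64);
        printf("process 0 now at %u, process 1 now at %u\n",
               (unsigned)procs[0].base, (unsigned)procs[1].base); /* 64, 114   */
        return 0;
    }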

  15. Reducing the Impact of Fragmentation • Selecting the optimum place in memory to start a process can help reduce the effect of fragmentation. • Four common solutions:- • Best Fit • Worst Fit • First Fit • Buddy
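The first three strategies differ only in which free slot they choose; a sketch, with an invented hole array standing in for the free list (the buddy system is sketched after slide 24):

    #include <stdio.h>
    #include <stddef.h>

    typedef struct { size_t start, size; } hole;   /* one free slot (assumed) */

    static int first_fit(const hole *h, int n, size_t need)
    {
        for (int i = 0; i < n; i++)
            if (h[i].size >= need) return i;       /* first slot big enough   */
        return -1;
    }

    static int best_fit(const hole *h, int n, size_t need)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (h[i].size >= need && (best < 0 || h[i].size < h[best].size))
                best = i;                          /* tightest slot that fits */
        return best;
    }

    static int worst_fit(const hole *h, int n, size_t need)
    {
        int worst = -1;
        for (int i = 0; i < n; i++)
            if (h[i].size >= need && (worst < 0 || h[i].size > h[worst].size))
                worst = i;                         /* biggest slot            */
        return worst;
    }

    int main(void)
    {
        hole holes[] = { { 0, 30 }, { 100, 80 }, { 300, 45 } };
        printf("first fit: %d  best fit: %d  worst fit: %d\n",
               first_fit(holes, 3, 40),
               best_fit(holes, 3, 40),
               worst_fit(holes, 3, 40));           /* 1, 2, 1                 */
        return 0;
    }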

  16.-19. Best Fit • Put the process in the slot it fits best. [animation frames: processes 1-4 placed into the free slots above the O/S]

  20.-23. Worst Fit • Put the process in the biggest slot. [animation frames: the same processes placed again, each into the largest free slot]

  24. First Fit • Save time by not searching the whole free list - just put the process in the first slot that is big enough. Buddy • Divide memory into segments whose sizes are powers of two (2^n). • Create smaller segments by splitting larger segments in half when required.
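A compact sketch of the buddy rule just described: round the request up to a power of two, then split bigger blocks in half until a block of that size exists. Free-list details and coalescing on free are omitted, and the order limits are made-up values:

    #include <stdio.h>
    #include <stddef.h>

    #define MIN_ORDER 4    /* smallest block: 2^4  = 16 bytes   (assumed) */
    #define MAX_ORDER 10   /* whole pool:     2^10 = 1024 bytes (assumed) */

    /* free_count[k] = free blocks of size 2^k; start with one big block.  */
    static int free_count[MAX_ORDER + 1] = { [MAX_ORDER] = 1 };

    /* Returns k (the caller gets a 2^k-byte block), or -1 if nothing fits. */
    static int buddy_alloc(size_t bytes)
    {
        int k = MIN_ORDER;
        while (k <= MAX_ORDER && ((size_t)1 << k) < bytes) k++;  /* round up */
        if (k > MAX_ORDER) return -1;
        int j = k;
        while (j <= MAX_ORDER && free_count[j] == 0) j++;    /* find a block */
        if (j > MAX_ORDER) return -1;
        free_count[j]--;                 /* take the 2^j block                */
        while (j > k) {                  /* split in half down to 2^k         */
            j--;
            free_count[j]++;             /* one half goes back on a free list */
        }                                /* the other half is carried down    */
        return k;
    }

    int main(void)
    {
        printf("100 bytes -> 2^%d block\n", buddy_alloc(100));  /* 2^7 = 128  */
        printf("600 bytes -> %d (nothing big enough left)\n", buddy_alloc(600));
        return 0;
    }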

  25. Swapping • Suppose in the original example process 1 hasn’t finished - it has just been blocked ...

  26.-27. The Dispatcher in Operation [animation frames: the memory layout before and after blocked process 1 is moved out so that process 3 can run above the O/S]

  28. Swapping • Suppose in the original example process 1 hasn’t finished - it has just been blocked. • We can still run process 3 if we can shift process 1 out of memory while it is blocked. • The best (only) place to put it is on disk - this is called swapping.

  29. Summary of Approach so Far • Use Base and Limit registers to support multiple processes. • Swap blocked processes onto disk to free up memory. • Pick an appropriate placement strategy to minimise the effect of fragmentation. • Compact when necessary.

  30. Shortcomings • Limited to the physical memory size of the machine. • Can’t support any data sharing. • Better approach is based on the idea of paging.

  31. Paging - A Really Important Concept • The basic idea - divide all memory up into page frames. • Divide processes up into pages. • Maintain a table which allows you to map addresses to the appropriate page/page frame. • Very widely used - the 486 and later processors have paging support built in.

  32.-40. Paging .. How It Works [animation frames: the three pages of Process A are copied one at a time into free page frames scattered through memory]

  41. The Page Table • The Page Table holds the mapping of pages to page frames. • When a memory location must be accessed its address is translated into a page number and an offset within that page. • If the page is not currently in a page frame (it is still on disk) a page fault occurs. • Performance depends on the number of page faults.
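A rough sketch of the translation slide 41 describes, assuming a 4 KB page size and a single-level table (both assumptions, not values from the lecture). The address is split into a page number and an offset, and a missing frame stands in for a page fault:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE   4096u    /* assumed page size                      */
    #define NUM_PAGES   16       /* tiny address space, just for the demo  */
    #define NOT_PRESENT (-1)     /* page is on disk, not in a frame        */

    static int page_table[NUM_PAGES];   /* page number -> frame number     */

    /* Returns the physical address, or -1 to stand in for "page fault:
       bring the page in from disk, update the table, then retry".         */
    static long translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr / PAGE_SIZE;
        uint32_t offset = vaddr % PAGE_SIZE;
        if (page >= NUM_PAGES || page_table[page] == NOT_PRESENT)
            return -1;                                    /* page fault     */
        return (long)page_table[page] * PAGE_SIZE + offset;
    }

    int main(void)
    {
        for (int i = 0; i < NUM_PAGES; i++) page_table[i] = NOT_PRESENT;
        page_table[0] = 5;        /* page 0 lives in frame 5                */
        page_table[1] = 2;        /* page 1 lives in frame 2                */

        printf("vaddr 0x1010 -> paddr 0x%lx\n",
               (unsigned long)translate(0x1010));         /* frame 2 + 0x10 */
        printf("vaddr 0x3000 -> %ld (page fault)\n", translate(0x3000));
        return 0;
    }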

  42. Placement and Replacement Policies • Placement doesn’t matter because you can’t subdivide pages - every page fits any free page frame equally well. • Replacement is critical because you want to avoid swapping out pages which will be accessed again soon. • Three common strategies:- • Least Recently Used (LRU). • Least Frequently Used (LFU). • First In First Out (FIFO).

  43. Least Recently Used • Replace the page which hasn’t been accessed for the longest time. • Assumes future access patterns will be the same as previous access patterns. • Need to record the sequence of page accesses.
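Minimal bookkeeping for LRU as just described: stamp every access with a counter and evict the frame with the oldest stamp. FRAMES and the names are illustrative assumptions:

    #include <stdio.h>

    #define FRAMES 3                      /* assumed number of page frames  */

    static unsigned last_used[FRAMES];    /* "time" of most recent access   */
    static unsigned now;

    static void touch(int frame) { last_used[frame] = ++now; }

    static int lru_victim(void)           /* unused for the longest time    */
    {
        int victim = 0;
        for (int f = 1; f < FRAMES; f++)
            if (last_used[f] < last_used[victim]) victim = f;
        return victim;
    }

    int main(void)
    {
        touch(0); touch(1); touch(2); touch(0); touch(2);
        printf("evict frame %d\n", lru_victim());   /* frame 1 - oldest use  */
        return 0;
    }

Recording the access sequence on every single memory reference is the expensive part; real hardware only approximates exact LRU.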

  44. Least Frequently Used • Replace the page which has been used least frequently. • As with LRU, assumes future access patterns will be the same as previous access patterns. • Some fiddling is necessary to make sure recently loaded pages (which haven’t had time to build up a count) aren’t swapped straight back out again. • Need to record the number of times a page is accessed.
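The same kind of sketch for LFU: count accesses per frame and evict the lowest count. The reset-on-load detail is one simple stand-in for the fiddling the slide mentions; FRAMES and the names are assumptions:

    #include <stdio.h>

    #define FRAMES 3

    static unsigned use_count[FRAMES];

    static void touch(int frame)  { use_count[frame]++; }

    static void loaded(int frame)           /* new page placed in this frame */
    {
        use_count[frame] = 1;  /* give a fresh page a starting count; real
                                  systems need more care so new pages aren't
                                  evicted straight back out (the "fiddling") */
    }

    static int lfu_victim(void)             /* least frequently used frame   */
    {
        int victim = 0;
        for (int f = 1; f < FRAMES; f++)
            if (use_count[f] < use_count[victim]) victim = f;
        return victim;
    }

    int main(void)
    {
        loaded(0); loaded(1); loaded(2);
        touch(0); touch(0); touch(2);
        printf("evict frame %d\n", lfu_victim());   /* frame 1 - used least  */
        return 0;
    }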

  45. First In First Out • Replace the page which has been resident the longest. • Ignores the fact this might be a crucial system page. • Generally performs a little worse than the other two.
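FIFO needs almost no bookkeeping - evict frames in the order they were filled, regardless of how often each page is used. A sketch with illustrative names:

    #include <stdio.h>

    #define FRAMES 3

    static int next_victim;                 /* index of the oldest frame      */

    static int fifo_victim(void)
    {
        int victim = next_victim;
        next_victim = (next_victim + 1) % FRAMES;   /* circular pointer       */
        return victim;
    }

    int main(void)
    {
        /* Frames filled in order 0, 1, 2 are evicted in that same order,
           even if frame 0 holds a heavily used (or crucial system) page.     */
        for (int i = 0; i < 4; i++)
            printf("evict frame %d\n", fifo_victim());   /* 0, 1, 2, 0        */
        return 0;
    }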

  46. So Just How Well Does Paging Scale ? • There is an obvious question here about how many processes we can support effectively. • Denning (1970) identified the concept of a process’s working set of pages. • This is based on the principle of locality, i.e. program references tend to be grouped into small localities of address space and these change slowly over time.
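One way to make the working set concrete (following the usual definition, not code from the lecture): the set of distinct pages referenced in the last DELTA references. The window size and reference string below are made up:

    #include <stdio.h>
    #include <stdbool.h>

    #define DELTA    4     /* size of the look-back window (assumed)       */
    #define MAX_PAGE 16

    /* Distinct pages referenced in refs[t-DELTA+1 .. t], clamped at 0.    */
    static int working_set_size(const int *refs, int t)
    {
        bool seen[MAX_PAGE] = { false };
        int size = 0;
        int start = (t + 1 >= DELTA) ? t + 1 - DELTA : 0;
        for (int i = start; i <= t; i++)
            if (!seen[refs[i]]) { seen[refs[i]] = true; size++; }
        return size;
    }

    int main(void)
    {
        /* Locality in action: references cluster around a few pages, and
           the cluster only drifts slowly.                                  */
        int refs[] = { 1, 2, 1, 2, 1, 3, 2, 3, 7, 8, 7, 8 };
        int n = (int)(sizeof refs / sizeof refs[0]);
        for (int t = 0; t < n; t++)
            printf("t=%2d page=%d working set size=%d\n",
                   t, refs[t], working_set_size(refs, t));
        return 0;
    }

Because of locality the working set stays small, so only those pages need to be resident for the process to run without constant page faults - which is the point the next slide picks up.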

  47. So Just How Well Does Paging Scale ? • Paging works so long as we can keep every process’s working set in memory. • Otherwise we continually generate page faults, leading to a state of affairs known as thrashing.

  48. Thrashing [graph: processor utilisation plotted against the number of processes]

  49. Summary • Looked at the topic of Memory Management. • Highlighted the problem of fragmentation. • Discussed different placement strategies for fitting processes into memory. • Covered in detail the workings of paged systems.

  50. Coming Next Week • Interfacing to I/O Devices.
