
Chapter 9, Virtual Memory Overheads, Part 1 Sections 9.1-9.5



  1. Chapter 9, Virtual MemoryOverheads, Part 1Sections 9.1-9.5

  2. 9.1 Background • Remember the sequence of developments of memory management from the last chapter: • 1. The simplest approach: • Define a memory block of fixed size large enough for any process and allocate such a block to each process (see MISC) • This is tremendously rigid and wasteful

  3. 2. Allocate memory in contiguous blocks of varying size • This leads to external fragmentation and, by the fifty-percent rule, a waste of about 1/3 of memory (for N units allocated, another .5N are lost to fragmentation) • There is overhead in maintaining allocation tables down to the byte level

  4. 3. Do (simple) paging. • Memory is allocated in fixed size blocks • This solves the external fragmentation problem • This also breaks the need for allocation of contiguous memory • The costs as discussed so far consist of the overhead incurred from maintaining and using a page table
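The page table lookup underlying simple paging can be sketched as follows. This is a toy model, not any real MMU's behavior; the 4 KB page size and the table contents are made-up values for illustration:

```python
# Toy sketch of simple paging translation, assuming a 4 KB page size
# and a hypothetical page table mapping page numbers to frame numbers.
PAGE_SIZE = 4096

page_table = {0: 7, 1: 2, 2: 9}  # page -> frame (made-up mapping)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE       # high part selects the page
    offset = logical_addr % PAGE_SIZE      # low part is the offset
    frame = page_table[page]               # the per-process table lookup
    return frame * PAGE_SIZE + offset

# Logical address 4100 is on page 1, offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 4096 + 4 = 8196.
```

Because allocation happens in fixed-size frames, no contiguous physical block larger than one frame is ever required, which is how paging removes external fragmentation.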

  5. Limitations of Paging—Loading Complete Programs • Virtual memory is motivated by several limitations in the paging scheme as presented so far • One limitation is that it’s necessary to load a complete program for it to run

  6. These are examples of why it might not be necessary to load a complete program: • 1. Error handling routines may not be called during most program runs • 2. Arrays of predeclared sizes may never be completely filled

  7. 3. Other routines besides error handling may also be rarely used • 4. For a large program, even if all parts are used at some time during a run, by definition, they can’t all be used at the same time • This means that at any given time the complete program doesn’t have to be loaded

  8. Reasons for wanting to be able to run a program that’s only partially loaded • 1. The size of a program is limited to the physical memory on the machine • Given current memory sizes, this by itself is not a serious limitation, although in some environments it might still be

  9. 2. For a large program, significant parts of it may not be used for significant amounts of time. • If so, it’s an absolute waste to have the unused parts loaded into memory • Even with large memory spaces, conserving memory is desirable in order to support multi-tasking

  10. 3. Another area of saving is in loading or swapping cost from secondary storage • If parts of a program are never needed, reading and writing from secondary storage can be saved • In general this means leaving more I/O cycles available for useful work

  11. Not having to load a complete program also means: • A program will start faster when initially scheduled because there is less I/O for the long term scheduler to do • The program will be faster and less wasteful during the course of its run in a system that does medium term scheduling or swapping

  12. Limitations of Paging—Fixed Mapping to Physical Memory • There is also another, in a sense more general, limitation to paging as presented so far: • The idea was that once a logical page was allocated a physical frame, it didn’t move • It’s true that medium term scheduling, swapping, and compaction may move a process, but this has to be specially supported • Once scheduled and running, a process’s location in memory doesn’t change

  13. If page locations are fixed in memory, that implies a fixed mapping between the logical and physical address space throughout a program run • More flexibility can be attained if the logical and physical address spaces are decoupled

  14. The idea is that at one time a logical page would be at one physical address, at another time it would be at another • Run-time address resolution would handle finding the correct frame for a page when needed

  15. Definition of Virtual Memory • Definition of virtual memory: • The complete separation of logical memory space from physical memory space from the programmer’s point of view

  16. At any given time during a program run, any page, p, in the logical address space could be at any frame, f, in the physical memory space • Only that part of a program that is running has to have been loaded into main memory from secondary storage

  17. Not only could any page, p, in the address space be in any frame, f, at run time • Any logical address, on some page p, could still be located in secondary storage at any point during a run when that address isn’t actually being accessed

  18. Virtual Memory and Segmentation and Paging • Both segmentation and paging were mentioned in the last chapter • In theory, virtual memory can be implemented with segmentation • However, that’s a mess • The most common implementation is with paging • That is the only approach that will be covered here

  19. 9.2 Demand Paging • If it’s necessary to load a complete process in order for it to run, then there is an up-front cost of swapping all of its pages in from secondary storage to main memory • If it’s not necessary to load a complete process in order for it to run, then a page only needs to be swapped into main memory if the process generates an address on that page • This is known as demand paging

  20. In general, when a process is scheduled it may be given an initial allocation of frames in memory • From that point on, additional frames may be allocated through demand paging • If a process is not even given an initial footprint and it acquires all of its memory through paging, this is known as pure demand paging

  21. Demand Paging Analogy with TLB • Demand paging from secondary storage to main memory is roughly analogous to what happens on a miss between the page table and the TLB • Initially, the TLB can be thought of as empty • The first time the process generates an address on a given page, that causes a TLB miss, and the page entry is put into the TLB

  22. With pure demand paging, you can think of the memory allocation of a process as being “empty” • The attempt to access an unloaded page can be thought of as a miss • This miss is what triggers the allocation of a frame in memory to that page
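Pure demand paging can be modeled in a few lines. This is a toy sketch under stated assumptions: the process starts with no frames, the free-frame list contents are made up, and a "miss" on a first touch stands in for a page fault:

```python
# Toy model of pure demand paging: the process starts "empty", and the
# first touch of each page is a miss that allocates a frame to it.
loaded = {}               # page -> frame, initially empty
free_frames = [5, 3, 8]   # hypothetical free-frame list
faults = 0

def access(page):
    global faults
    if page not in loaded:              # the "miss" — a page fault
        faults += 1
        loaded[page] = free_frames.pop()
    return loaded[page]

for p in [0, 1, 0, 0, 1]:
    access(p)
# Only the first touch of each page faults: 2 faults for 5 accesses.
```

As with the TLB, the payoff depends on locality: repeat accesses to an already-loaded page cost nothing extra.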

  23. The Separation of Logical and Physical Address Space • An earlier statement characterized virtual memory as completely separating the logical and physical address spaces • Another way to think about this is that from the point of view of the logical address space, there is no difference between main memory and secondary storage

  24. In other words, the logical address space may refer to parts of programs which have been loaded into memory and parts of programs which haven’t • Accessing memory that hasn’t been loaded is slower, but the loading is handled by the system

  25. From the point of view of the address, the running process doesn’t know or care whether it’s in main memory or secondary storage • The MMU and the disk management system work together to provide transparent access to the logical address space of a program • The IBM AS/400 is an example of a system where addresses literally extended into the secondary storage space

  26. Maximum Address Space • The maximum address space is limited by the machine architecture • It is defined by how many bits are available for holding an address • The amount of installed main memory might be less than the maximum address space

  27. If so, then the address space extends into secondary storage • Virtual memory effectively means that secondary storage functions as a transparent extension of physical main memory
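The relationship between address bits and maximum address space is simple arithmetic; a minimal sketch, assuming a byte-addressed machine:

```python
# With n address bits, 2**n bytes are addressable (byte-addressed
# machine assumed), regardless of how much RAM is actually installed.
def max_address_space(bits):
    return 2 ** bits

# 32 address bits give a 4 GiB logical space; a machine with, say,
# 1 GiB of RAM lets the remaining space extend into secondary storage.
assert max_address_space(32) == 4 * 1024**3
```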

  28. Support for Virtual Memory and Demand Paging • From a practical point of view, it becomes necessary to have support to tell which pages have been loaded into physical memory and which have not • This is part of the hardware support for the MMU

  29. In the earlier discussions of page tables, the idea of a valid/invalid bit was introduced • Under that scheme, the page table was long enough to accommodate the maximum number of allocated frames • If a process wasn’t allocated the maximum, then page addresses outside of its allocation were marked invalid

  30. The scheme can be extended for demand paging: • Valid means valid and in memory. • Invalid means either invalid or not loaded
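The extended scheme can be pictured as a page table entry with two pieces of state. This is a hypothetical layout for illustration (the `in_range` flag is an assumption standing in for whatever bookkeeping the O/S uses to tell "truly invalid" from "just not loaded"):

```python
# Sketch: a page-table entry under the extended scheme. "valid" means
# loaded in memory; a hypothetical in_range flag records whether the
# page belongs to the process's address space at all.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PTE:
    frame: Optional[int] = None  # physical frame number, if loaded
    valid: bool = False          # valid == in memory
    in_range: bool = True        # legal page for this process?

pte = PTE()  # a legal page that has not been loaded yet
# An access to it traps; the handler then checks in_range to decide
# between "load the page" and "terminate the process".
```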

  31. Under the previous scheme, if an invalid page was accessed, a trap was generated, and the running process was halted due to an attempt to access memory outside of its range • Under the new scheme, an attempt to access an invalid page also generates a trap, but this is not necessarily an error • The trap is known as a page fault trap

  32. This is an interrupt which halts the user process and triggers system software which does the following: • 1. It checks a table to see whether the address was really invalid or just not loaded • 2. If invalid, it terminates the process

  33. 3. If valid, it gets a frame from the list of free frames (the frame table), allocates it to the process, and updates the data structures to show that the frame is allocated to page x of the process • 4. It schedules (i.e., requests) a disk operation to read the page from secondary storage into the allocated frame

  34. 5. When the read is complete, it updates the data structures to show that the page is now valid (among other things, setting the valid bit) • 6. It allows the user process to restart on exactly the same instruction that triggered the page fault trap in the first place
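The six handler steps above can be sketched as one function. All the data structures here are made-up stand-ins (a real handler works on kernel structures and asynchronous disk I/O, neither of which is modeled):

```python
# Sketch of the page fault trap handler's steps, with toy data structures.
free_frames = [4, 9]                     # the frame table's free list
page_table = {0: {"frame": 6, "valid": True},
              1: {"frame": None, "valid": False}}  # legal but unloaded
legal_pages = {0, 1}

def read_from_disk(page, frame):
    pass  # placeholder for the scheduled disk read (step 4)

def handle_page_fault(page):
    if page not in legal_pages:          # steps 1-2: truly invalid
        return "terminate"
    frame = free_frames.pop()            # step 3: grab a free frame
    read_from_disk(page, frame)          # step 4: schedule the read
    page_table[page] = {"frame": frame, "valid": True}  # step 5
    return "restart"                     # step 6: re-run the instruction
```

On the "restart" path the user process resumes at the same IP value; the re-executed instruction now finds the page valid and proceeds without faulting.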

  35. Note two things about the sequence of events outlined above. • First: • Restarting is just an example of context switching • By definition, the user process’s state will have been saved • It will resume at the IP value it was on when it stopped • The difference is that the page will now be in memory and no fault will result

  36. Second: • The statement was made, “get a frame from the list of free frames”. • You may be wondering, what if there are no free frames? • At that point, memory is “over-allocated”. • That means that it’s necessary to take a frame from one process and give it to another • This is an important consideration that will be covered in detail later

  37. Demand Paging and TLB’s as a Form of Caching • Demand paging from secondary storage to main memory is analogous to bringing an entry from the page table to the TLB • Remember that the TLB is a specialized form of cache • Its effectiveness relies on locality of reference • If references were all over the map, it would provide no benefit

  38. In practice, memory references tend to cluster in certain areas over certain periods of time, and then move on • This means that entries remain in the TLB and remain useful over a period of time

  39. The logic of bringing pages from secondary storage to memory is also like caching • Pages that have been allocated to frames should remain useful over time, and can profitably remain in those frames • Pages should tend not to be used only once and then have to be swapped out immediately because another page is needed and over-allocation of memory has occurred

  40. Hardware Support for Demand Paging • Basic hardware support for demand paging is the same as for regular paging • 1. A page table that records valid/invalid pages • 2. Secondary storage—a disk. • Recall that program images are typically not swapped in from the file system. • The O/S maintains a ready queue of program images in the swap space, a.k.a., the backing store

  41. Problems with Page Faults • A serious problem can occur when restarting a user process after a page fault • This is not a problem with context switching per se • It is a problem that is reminiscent of the problems of concurrency control

  42. An individual machine instruction actually consists of multiple sub-parts • Each of the sub-parts may require memory access • Thus, a page fault may occur on different sub-parts

  43. Memory is like a shared resource • When a process is halted mid-instruction, it may leave memory in an inconsistent state • One approach to dealing with this is to roll back any prior action a process has taken on memory before restarting it • Another approach is to require that a process acquire all memory needed before taking any further action

  44. Instruction execution can be broken down into these steps: • 1. Fetch the instruction • 2. Decode the instruction • 3. Fetch operands, if any • 4. Do the operation (execute) • 5. Write the results, if any

  45. A page fault can occur on the instruction fetch • In other words, during execution, the IP reaches a value that hasn’t been loaded yet • This presents no problem • The page fault causes the page containing the next instruction to be loaded • Then execution continues on that instruction

  46. A page fault can also occur on the operand fetches • In other words, the pages containing the addresses referred to by the instruction haven’t been loaded yet • A little work is wasted on a restart, but there are no problems • The page fault causes the operand pages to be loaded (making sure not to replace the instruction page) • Then execution continues on that instruction

  47. Specific Problem Scenario • In some hardware architectures there are instructions which can modify more than one thing (write >1 result). • If the page fault occurs in the sequence of modifications, there is a potential problem • Whether the problem actually occurs is simply due to the vagaries of scheduling and paging • In other words, this is a problem like interrupting a critical section

  48. The memory management page fault trap handling mechanism has to be set up to deal with this potential problem • The book gives two concrete examples of machine instructions which are prone to this • One example was from a DEC (rest in peace) machine. • It will not be pursued

  49. The other example comes from an IBM instruction set • There was a memory move instruction which would cause a block of 256 bytes to be relocated to a new address
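The restart hazard with such a block-move instruction can be demonstrated with a toy model (the fault injection and offsets are contrived for illustration; the real IBM instruction operates on storage, not a Python list):

```python
# Toy illustration of the restart hazard on a block move instruction.
# If the source and destination overlap and a fault occurs partway
# through, part of the source has already been overwritten, so simply
# re-running the move from the start no longer copies the original data.
mem = list(b"ABCDEFGH") + [0] * 8

def block_move(mem, src, dst, n, fault_at=None):
    for i in range(n):
        if i == fault_at:
            raise RuntimeError("page fault mid-instruction")
        mem[dst + i] = mem[src + i]

# Overlapping move: copy 6 bytes from offset 0 to offset 2,
# faulting after 4 bytes have been written.
try:
    block_move(mem, 0, 2, 6, fault_at=4)
except RuntimeError:
    pass
# mem now begins ABABAB..: bytes at offsets 2..5 were clobbered before
# the fault, so restarting the move would copy corrupted source data.
```

This is why such architectures need extra support, e.g. checking both ends of the move for residency before modifying anything, or saving enough state to undo a partial move.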

  50. Memory paging is transparent • Application programs simply deal in logical addresses • There is not, and there should not be any kind of instruction where an application program has to know or refer to its pages/frames when doing memory operations
