Virtual Memory - PowerPoint PPT Presentation

Presentation Transcript
Virtual Memory
  • Early computers had a small and fixed amount of memory.
    • All programs had to fit in this memory.
    • Overlays were used when a program was too big.
    • The programmer was responsible for bringing in the overlays and managing the overlay process.
    • In 1961, researchers at Manchester, England proposed a technique for automating the overlay process, relieving the programmer of this job. This method is now known as virtual memory.
    • By the early 1970s, it was available on most computers.
  • The idea of the Manchester group was to separate the concepts of address space and memory locations.
    • The number of addressable words depends only on the number of bits used in the address, not on the amount of physical memory.
    • The address space is the set of all possible addresses.
    • At any one time, only a subset of the addresses in the address space is actually in physical memory; the other addresses are on secondary storage. A mapping records where each address actually resides.
  • Now what happens if we attempt to address some location which is not currently in physical memory?
    • The contents of main memory are saved on disk.
    • The chunk of addresses containing the address we want are located on disk.
    • This chunk is read into main memory.
    • The address map is changed to reflect this new reality.
    • Execution continues as though nothing has happened.
  • This technique is called paging and the chunks of memory read in from disk are called pages.
  • A more sophisticated way of mapping addresses from the address space onto actual memory addresses is also possible.
    • A memory map or page table relates virtual addresses to physical addresses.
    • We presume that there is enough room on disk to store the entire virtual address space.
    • Programs are written as if the entire virtual address space is available.
  • Paging gives the programmer the illusion of a large, contiguous, linear main memory, the same size as the virtual address space.
  • In reality, the physical memory may be smaller (or larger) than the virtual address space.
  • The paging mechanism is transparent to the programmer.
Implementation of Paging
  • The virtual address space is broken up into a number of equal-sized pages.
    • Page sizes ranging from 512 to 64K bytes are common.
    • Sizes as large as 4 MB are used occasionally.
    • The page size is always a power of 2.
    • The physical address space is broken up into pieces in a similar way, each piece being the size of a page.
    • These pieces of main memory into which the pages go are called page frames.
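Because the page size is always a power of 2, the page number and offset can be extracted with a shift and a mask rather than a division. A minimal sketch in Python (the 4 KB page size is just an illustrative choice):

```python
PAGE_SIZE = 4096                           # a power of 2: 4 KB, so a 12-bit offset
OFFSET_BITS = PAGE_SIZE.bit_length() - 1   # 12

def split(vaddr):
    """Split a virtual address into (page number, offset within the page)."""
    page = vaddr >> OFFSET_BITS        # high-order bits select the page
    offset = vaddr & (PAGE_SIZE - 1)   # low-order bits are the offset
    return page, offset

print(split(8192))   # start of page 2 -> (2, 0)
print(split(8195))   # 3 bytes into page 2 -> (2, 3)
```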
Implementation of Paging
  • Every computer with virtual memory has a device for doing the virtual-to-physical mapping.
  • This device is called the MMU (Memory Management Unit).
    • It may be on the CPU chip, or it may be on a separate chip that works closely with the CPU chip.
    • Since our sample MMU maps from a 32-bit virtual address to a 15-bit physical address, it needs a 32-bit input register and a 15-bit output register.
Implementation of Paging
  • On the following slide, the MMU is presented with a 32-bit virtual address.
  • It separates the address into a 20-bit virtual page number and a 12-bit offset within the page (because the pages are 4K).
    • The virtual page number is used as an index into the page table to find the entry for the page referenced.
    • In the example, the virtual page number is 3, so entry 3 of the page table is selected, as shown.
  • The first thing the MMU does is check to see if the page referenced is in main memory.
Implementation of Paging
  • Not all virtual pages can be in memory at once.
  • The MMU makes this check by examining the present/absent bit in the page table entry.
  • In the example, the bit is 1, meaning the page is currently in memory.
  • Now, the page frame value from the selected entry (6 in this case) is copied into the upper 3 bits of the 15-bit output register.
  • In parallel with this operation, the low-order 12-bits of the virtual address (the page offset field) are copied into the low-order 12 bits of the output register.
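The translation just described can be sketched in Python. The page-table contents follow the slide's example (entry 3 present, page frame 6); the dictionary-based table and the exception standing in for a trap are simplifications:

```python
# Sketch of the example MMU: 32-bit virtual address, 4 KB pages,
# 15-bit physical address (3-bit page frame number + 12-bit offset).
page_table = {3: (1, 6)}   # vpn -> (present/absent bit, page frame)

def mmu_translate(vaddr):
    vpn = vaddr >> 12        # 20-bit virtual page number
    offset = vaddr & 0xFFF   # 12-bit offset within the page
    present, frame = page_table.get(vpn, (0, None))
    if not present:
        raise RuntimeError("page fault")   # handled by the OS
    return (frame << 12) | offset          # frame bits above the copied offset

vaddr = (3 << 12) | 0x006            # virtual page 3, offset 6
print(hex(mmu_translate(vaddr)))     # frame 6, offset 6 -> 0x6006
```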
Demand Paging and the Working Set Model
  • When a reference is made to a page not present in main memory, it is called a page fault.
    • After a page fault has occurred, it is necessary for the OS to read in the required page from the disk, enter its new physical memory location in the page table, and then repeat the instruction that caused the fault.
    • In this way, we can start a program running on a machine with virtual memory even when none of the program is in main memory.
  • This method of operating a virtual memory is known as demand paging.
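Demand paging can be sketched as a small simulation. The `PTE` class, the FIFO victim choice, and the comment standing in for the disk read are illustrative stand-ins, not a real OS interface:

```python
from collections import deque

class PTE:
    """Page-table entry (illustrative)."""
    def __init__(self):
        self.present = False
        self.frame = None

def handle_page_fault(vpn, page_table, free_frames, loaded):
    """Load page vpn, evicting the longest-resident page if memory is full."""
    if not free_frames:
        victim = loaded.popleft()                # FIFO victim choice (illustrative)
        free_frames.append(page_table[victim].frame)
        page_table[victim].present = False
    frame = free_frames.pop()
    # ... here a real OS would read page vpn from disk into `frame` ...
    page_table[vpn].frame = frame                # enter its new location
    page_table[vpn].present = True
    loaded.append(vpn)

def access(vpn, page_table, free_frames, loaded, faults):
    """Touch a page, faulting it in on demand."""
    if not page_table[vpn].present:              # page fault!
        faults.append(vpn)
        handle_page_fault(vpn, page_table, free_frames, loaded)
    return page_table[vpn].frame

# Start with NO pages loaded: the program pages itself in on demand.
page_table = [PTE() for _ in range(5)]
free_frames, loaded, faults = [0, 1], deque(), []
for vpn in [0, 1, 0, 2]:
    access(vpn, page_table, free_frames, loaded, faults)
print(faults)   # -> [0, 1, 2]
```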
Demand Paging and the Working Set Model
  • An alternative approach is based on the observation that most programs do not reference their address space uniformly but that the references tend to cluster on a small number of pages.
  • At any instant in time, there exists a set consisting of all the pages used by the k most recent memory references.
  • This set is called the working set of the program and can be loaded when the program is restarted.
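For a trace of byte addresses, the working set falls out directly from the last k references (the 4 KB page size is an assumption for illustration):

```python
def working_set(refs, k, page_size=4096):
    """Pages touched by the k most recent memory references (4 KB pages assumed)."""
    return {addr // page_size for addr in refs[-k:]}

refs = [0, 4097, 100, 8200, 4200]   # a trace of byte addresses
print(working_set(refs, k=3))       # the last 3 references touch pages 0, 2, 1
```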
Page Replacement Policy
  • Ideally, the set of pages that a program is actively and heavily using, called the working set, can be kept in memory to reduce page faults.
  • In order to assure this, the OS should attempt to replace pages which will not be used in the near future.
  • One popular algorithm is the LRU (Least Recently Used) algorithm.
    • This usually works well, but there are certain pathological cases.
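A textbook-style sketch of LRU replacement, counting faults for a reference string; the `OrderedDict` doubles as a recency-ordered list of resident pages:

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement (illustrative sketch)."""
    frames = OrderedDict()                 # resident pages, least recently used first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)   # evict the least recently used page
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4, 1, 2], n_frames=3))   # -> 5
```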
Page Size and Fragmentation
  • If the user’s program and data happen to fill an integral number of pages exactly, there will be no wasted space when they are in memory.
    • If not, there will be wasted space on the last page.
    • This is called internal fragmentation (because the wasted space is internal to some page).
    • A small page size means less internal fragmentation.
    • On the other hand, this requires a larger page table and more page faults.
    • Small page sizes also make inefficient use of the disk bandwidth.
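The wasted space on the last page is simple arithmetic; a sketch:

```python
def internal_fragmentation(size, page_size):
    """Bytes wasted on the last page when `size` bytes are stored in whole pages."""
    pages = -(-size // page_size)          # ceiling division: pages needed
    return pages * page_size - size

print(internal_fragmentation(10_000, 4096))   # 3 pages hold 12288 bytes -> 2288 wasted
print(internal_fragmentation(8192, 4096))     # exact fit -> 0 wasted
```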
  • For some problems, it is better to have multiple virtual address spaces rather than a single one.
  • Imagine a compiler that generates many tables:
    • Symbol table
    • Source text
    • Floating-point and integer constants
    • Parse tree
    • Stack
  • In a one-dimensional memory, the five tables have to be allocated contiguous chunks of virtual address space.
  • One solution to this problem is to provide many completely independent address spaces, called segments.
    • Different segments may, and usually do, have different lengths.
    • A segment is a logical entity which the programmer is aware of and uses as a single logical entity.
    • It usually does not contain a mixture of different types.
    • Segments can also be used for each procedure.
Implementation of Segmentation
  • Segmentation can be implemented in one of two ways: swapping and paging.
  • In the former, some set of segments is in memory at a given instant.
    • If a reference is made to a segment not currently in memory, that segment is brought into memory.
    • One or more segments may have to be written to the disk.
    • This is like demand paging, but pages are fixed size and segments are not.
    • External fragmentation can result because of this.
Implementation of Segmentation

(a)-(d) Development of external fragmentation.

(e) Removal of the external fragmentation by compaction.

Implementation of Segmentation
  • The second approach is to divide each segment up into fixed-size pages and demand-page them.
  • In this scheme, some of the pages of a segment may be in memory and others may not be.
  • To page a segment, a separate page table is needed for each segment.
    • Since a segment is just a linear address space, all the techniques we have seen so far for paging apply to each segment.
    • The only new feature here is that each segment gets its own page table.
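Paged segmentation can be sketched as ordinary paging applied per segment; the segment numbers, page tables, and frame assignments below are invented for illustration:

```python
PAGE = 4096   # 4 KB pages within each segment (illustrative)

# One page table per segment: vpn -> page frame.
segment_page_tables = {
    0: {0: 5, 1: 9},   # segment 0: its pages 0 and 1 are in frames 5 and 9
    1: {0: 2},         # segment 1: a single page, in frame 2
}

def translate(segment, offset):
    """Translate (segment, offset) by paging within that segment."""
    table = segment_page_tables[segment]      # each segment has its own page table
    vpn, page_offset = offset // PAGE, offset % PAGE
    return table[vpn] * PAGE + page_offset    # ordinary paging from here on

print(translate(0, 4100))   # segment 0, page 1, offset 4 -> frame 9 -> 36868
```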
Virtual Memory on the Pentium 4
  • The Pentium 4 has a sophisticated virtual memory system that supports demand paging, pure segmentation, and segmentation with paging.
  • The heart of the Pentium 4 virtual memory consists of two tables: the LDT (Local Descriptor Table) and the GDT (Global Descriptor Table).
    • Each program has its own LDT, but there is a single GDT, shared by all programs.
Virtual Memory on the Pentium 4
  • The LDT describes segments local to each program, including its code, data, stack, and so on, whereas the GDT describes system segments, including the OS itself.
  • To access a segment, a Pentium 4 program first loads a selector for that segment into one of the segment registers.
  • Each selector is a 16-bit number.
  • One of the selector bits tells whether the segment is local or global.
  • Thirteen other bits specify the LDT or GDT entry number. The other 2 bits relate to protection.
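The selector layout described here matches the real x86 format (13-bit index in bits 15..3, table-indicator in bit 2, privilege level in bits 1..0), so it can be unpacked with shifts and masks:

```python
def decode_selector(sel):
    """Unpack a 16-bit x86 segment selector into its three fields."""
    index = sel >> 3          # 13-bit LDT/GDT entry number
    local = (sel >> 2) & 1    # table indicator: 0 = GDT (global), 1 = LDT (local)
    rpl = sel & 0b11          # requested privilege level (protection)
    return index, local, rpl

# Selector naming GDT entry 5 at privilege level 3:
print(decode_selector((5 << 3) | 3))   # -> (5, 0, 3)
```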
Virtual Memory on the Pentium 4
    • Descriptor 0 is invalid and causes a trap if used.
    • At the time a selector is loaded into a segment register, the corresponding descriptor is fetched from the LDT or GDT and stored in internal MMU registers so it can be accessed quickly.
    • A descriptor consists of 8 bytes, including the segment’s base address, size, and other information.
  • The format of the selector makes locating the descriptor easy.
    • First either LDT or GDT is selected, based on bit 2.
    • Then the selector is copied to an MMU scratch register, and the 3 low-order bits are set to 0.
Virtual Memory on the Pentium 4
    • Now the address of either the LDT or GDT table (kept in internal MMU registers) is added to it, to give a direct pointer to the descriptor.
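Putting those two steps together: since each descriptor is 8 bytes, the selector with its low 3 bits zeroed is exactly index × 8, so adding the table's base address gives the descriptor's location. A sketch (the table base addresses are made up):

```python
def descriptor_address(selector, gdt_base, ldt_base):
    """Address of the 8-byte descriptor named by a selector (sketch)."""
    table_base = ldt_base if (selector >> 2) & 1 else gdt_base   # bit 2 picks the table
    return table_base + (selector & ~0b111)   # low 3 bits zeroed = index * 8

# GDT at 0x1000 (illustrative): selector for entry 5 -> descriptor at 0x1000 + 40
print(hex(descriptor_address(5 << 3, gdt_base=0x1000, ldt_base=0x2000)))   # -> 0x1028
```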
  • A (selector, offset) pair is converted to a physical address.
    • As soon as the hardware knows which segment register is being used, it can find the complete descriptor corresponding to that selector in its internal registers.
    • If the segment does not exist (selector 0) or is currently not in memory (P is 0), a trap occurs.
Virtual Memory on the Pentium 4
  • It then checks to see if the offset is beyond the end of the segment, in which case a trap also occurs.
  • If the G (Granularity) field is 0, the LIMIT field is the exact segment size in bytes, up to 1 MB.
  • If it is 1, the LIMIT field gives the segment size in pages.
  • The Pentium 4 page size is never smaller than 4 KB, so 20 bits is enough for segments up to 2^32 bytes.
  • Assuming that the segment is in memory and the offset is in range, the Pentium 4 then adds the 32-bit BASE field in the descriptor to the offset to form a linear address.
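The limit check and base addition can be sketched as follows, interpreting the LIMIT field in bytes when G = 0 and in 4 KB pages when G = 1, as the text describes (the field values are illustrative):

```python
PAGE = 4096

def linear_address(base, limit, g_bit, offset):
    """Sketch: trap if the offset is past the segment end, else BASE + offset."""
    size = limit * PAGE if g_bit else limit   # G=1: LIMIT is counted in pages
    if offset >= size:
        raise RuntimeError("trap: offset beyond end of segment")
    return base + offset

# A 2-page segment (G=1, LIMIT=2) based at 0x10000:
print(hex(linear_address(0x10000, limit=2, g_bit=1, offset=0x1FFF)))   # -> 0x11fff
```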
Virtual Memory on the Pentium 4
  • If paging is disabled (by a bit in a global control register), the linear address is interpreted as the physical address and sent to the memory to read or write.
    • Thus, with paging disabled we have a pure segmentation scheme with each segment’s base address given in its descriptor.
  • If paging is enabled, the linear address is interpreted as a virtual address and mapped onto the physical address using page tables.
    • A two-level mapping is used.
Virtual Memory on the Pentium 4
    • Each running program has a page directory consisting of 1024 32-bit entries.
    • It is located at an address pointed to by a global register.
    • Each entry points to a page table also containing 1024 32-bit entries.
    • The page table entries point to page frames.
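The two-level lookup splits the 32-bit linear address 10/10/12; a sketch with a made-up one-entry directory:

```python
def walk(linear, page_directory):
    """Two-level page-table walk: DIR (10 bits), PAGE (10 bits), OFFSET (12 bits)."""
    dir_i  = linear >> 22             # top 10 bits index the page directory
    page_i = (linear >> 12) & 0x3FF   # next 10 bits index the selected page table
    offset = linear & 0xFFF           # low 12 bits: offset within the page frame
    page_table = page_directory[dir_i]
    frame = page_table[page_i]
    return (frame << 12) | offset

# Directory entry 0 points at a page table mapping page 1 -> frame 7 (illustrative).
page_directory = {0: {1: 7}}
print(hex(walk(0x1ABC, page_directory)))   # dir 0, page 1, offset 0xABC -> 0x7abc
```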
  • To avoid making repeated references to memory, the Pentium 4 MMU has special hardware support to look up the most recently used DIR-PAGE combinations quickly.
Virtual Memory on the Pentium 4
  • If an application does not need segmentation, all segment registers can be set up with the same selector, whose descriptor has BASE = 0 and LIMIT set to the maximum.
  • The Pentium 4 also supports four protection levels, with level 0 being the most privileged and level 3 the least.
    • At each instant, a running program is at a certain level, indicated by a 2-bit field in its PSW (Program Status Word).
    • Each segment also belongs to a certain level.
Virtual Memory on the Pentium 4
  • As long as a program restricts itself to using segments at its own level, everything works fine.
  • Attempts to access data at a higher (less privileged) level are permitted.
  • Attempts to access data at a lower (more privileged) level are illegal and cause traps.
Virtual Memory on the UltraSPARC III
  • Virtual to physical page mappings on the UltraSPARC III
Virtual Memory on the UltraSPARC III

Data structures used in translating virtual addresses on the UltraSPARC.