
Virtual Memory

Explore the advantages, mechanisms, and terminology of virtual memory, including page faults, page tables, context switching, and replacement schemes. Learn how virtual memory allows efficient memory management in computer systems.


Presentation Transcript


  1. Virtual Memory

  2. Virtual Memory • Main memory can act as a cache for secondary storage. • Advantages: • Illusion of having more physical memory • a single user program can exceed the size of main memory. • Protection • safe and efficient sharing of memory. • Program relocation • simplifies loading the program for execution; relocation allows us to load the program into any location in main memory.

  3. Basic Mechanism: Address Translation • (Figure) Address translation maps virtual addresses either to physical addresses in main memory or, for pages not currently resident, to disk addresses.

  4. Terminology • Page: Virtual memory block. • Page fault: Virtual memory miss (i.e. a requested page is not in the main memory; retrieve it from the disk). • Physical address: The address that can directly be used to access main memory. • Virtual address: The address that the CPU produces, which must be translated to physical address. • Memory mapping or address translation: The process of calculating the physical address given the virtual address.

  5. Address Translation (page size is 4 KB) • (Figure) The virtual address consists of a virtual page number (bits 31–12) and a page offset (bits 11–0). • Translation replaces the virtual page number with a physical page number; the physical address consists of the physical page number (bits 29–12) and the unchanged page offset (bits 11–0).
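
The split above can be checked with a minimal C sketch (the address value is just an example, not from the slides): with 4 KB pages, the low 12 bits of a 32-bit virtual address are the page offset and the remaining 20 bits are the virtual page number.

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 12                          /* log2(4 KB page size) */

    int main(void)
    {
        uint32_t va     = 0x12345ABCu;              /* example 32-bit virtual address   */
        uint32_t vpn    = va >> OFFSET_BITS;        /* bits 31..12: virtual page number */
        uint32_t offset = va & ((1u << OFFSET_BITS) - 1);  /* bits 11..0: page offset  */
        printf("VPN = 0x%05x, offset = 0x%03x\n", (unsigned)vpn, (unsigned)offset);
        return 0;
    }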

  6. Page Fault • A page fault can take millions of cycles to process. • Design choices: • Pages should be large enough to amortize the high access time; 4 KB to 16 KB are typical sizes, and 32 KB and 64 KB pages are becoming popular. • Some embedded systems use 1 KB pages. • Fully associative placement of pages. • Page faults can be handled in software, because the overhead is small compared to the disk access time. • Write-through will not work efficiently; instead, write-back (copy-back) is used.

  7. Page Tables • Fully associative placement is used for pages. • A full search is impractical. • Pages are located using a full table, called the page table, that indexes the memory. • The page table resides in memory and is indexed with the virtual page number from the virtual address. • Each page table entry contains the corresponding physical page number. • Each program has its own page table, which maps the virtual address space of the program to main memory.

  8. Page Tables • To indicate the location of the page table in memory, the hardware includes a register, called the page table register, that points to the start of the page table. • For the sake of simplicity, assume for the time being that the page table is in a fixed and contiguous area of memory. • The page table contains a mapping for every possible virtual page • no tags are required.

  9. Page Tables • (Figure) The page table register points to the start of the page table. The 20-bit virtual page number (bits 31–12) indexes the page table; each entry holds a valid bit and an 18-bit physical page number. If the valid bit is 0, the page is not in memory. On a valid entry, the physical page number is concatenated with the 12-bit page offset to form the physical address (bits 29–0).
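
A minimal C sketch of that lookup (types and names are illustrative, not from the slides): the page table register is modeled as a pointer to the start of an array of entries, and a cleared valid bit signals a page fault.

    #include <stdint.h>
    #include <stdbool.h>

    #define OFFSET_BITS 12                          /* 4 KB pages */

    typedef struct {
        bool     valid;                             /* 1 = page is resident in memory */
        uint32_t ppn;                               /* physical page number           */
    } pte_t;

    /* 'page_table' plays the role of the page table register: it points to the
     * start of the active process's page table in memory.                       */
    bool translate(const pte_t *page_table, uint32_t va, uint32_t *pa)
    {
        uint32_t vpn    = va >> OFFSET_BITS;
        uint32_t offset = va & ((1u << OFFSET_BITS) - 1);

        if (!page_table[vpn].valid)                 /* valid bit off: page fault */
            return false;

        *pa = (page_table[vpn].ppn << OFFSET_BITS) | offset;
        return true;
    }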

  10. Context Switching • The page table, PC, and the registers specify the state of a program, often referred to as a process. • The OS saves the state of a program in order to allow another program to use the CPU. • The OS makes a process active by loading the process’s state. • The process’s address space, and all the data it can access in memory, is defined by its page table, which resides in memory. • The OS loads the page table register to point to the page table of the process it wants to make active.

  11. Page Faults • If a page fault occurs, the OS must be given control (through an exception). • The OS finds the page on disk and decides where to place the requested page in main memory. • The location of each page on disk must also be kept track of in virtual memory systems. • The OS usually creates the space on disk for all the pages of a process when it creates the process (swap space). • It also creates a data structure to record where each virtual page is stored on disk. • This data structure may be part of the page table or may be an auxiliary data structure indexed in the same way.

  12. Finding the Virtual Pages • (Figure) Each page table entry has a valid bit and either a physical page number or a disk address: if the valid bit is 1, the virtual page number maps to a page in physical memory; if it is 0, the entry gives the page’s location in disk storage.

  13. Replacement Schemes in VM • The OS creates a data structure that records which processes and which virtual addresses use each physical page. • On a page fault, if all the pages in main memory are in use, the OS must choose a page to replace. • An LRU replacement scheme (or an approximation of it) is usually implemented, as sketched after this slide. • Every page has a use bit or reference bit, which is set whenever the page is accessed. • The OS periodically clears the reference bits. • If the reference bit of a page is 0, the page has not been accessed during the most recent period.
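
A minimal sketch of one such approximation in C, assuming one reference bit per physical page frame (names are illustrative): a page whose bit is still 0 has not been touched since the last clearing pass and is a good replacement candidate.

    #include <stdbool.h>
    #include <stddef.h>

    /* One reference bit per physical page frame, set by hardware on each access. */
    typedef struct {
        bool reference;
    } frame_t;

    /* Choose a page to replace: prefer a frame whose reference bit is 0, i.e. a
     * page not accessed since the OS last cleared the bits.  Clearing the bits
     * as we scan implements the periodic clearing described above.              */
    size_t choose_victim(frame_t *frames, size_t nframes)
    {
        for (size_t i = 0; i < nframes; i++) {
            if (!frames[i].reference)
                return i;                           /* not recently used: replace it */
            frames[i].reference = false;            /* clear for the next period     */
        }
        return 0;                                   /* every page was recently used  */
    }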

  14. Size of Page Tables • With a 32-bit virtual address, 4 KB pages, and 4 bytes per page table entry, the total page table size: • 2^32 / 2^12 = 2^20 entries, at 4 bytes each = 4 MB of memory for the page table of each program in execution at any time. • There may be tens to hundreds of active programs on a machine at the same time.

  15. Smaller Page Tables • Page tables are stored in virtual memory rather than in physical memory. • Page tables are also subject to paging. • Two-level table lookup (Pentium). • (Figure) The page directory base register (PDBR) points to a page directory of 1024 PDEs; each PDE points to one of 1024 page tables (page table 0 through page table 1023), each holding 1024 PTEs.
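
A hedged sketch of that two-level walk in C, assuming the 10/10/12-bit split of a 32-bit address implied by the 1024-entry directory and tables (the entry layout and names are illustrative, not the actual Pentium format):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     present;                           /* entry maps something            */
        uint32_t frame;                             /* physical frame number           */
    } entry_t;

    typedef struct {
        entry_t  pde[1024];                         /* page directory (pointed to by PDBR) */
        entry_t *tables[1024];                      /* the 1024 second-level page tables   */
    } addr_space_t;

    bool translate2(const addr_space_t *as, uint32_t va, uint32_t *pa)
    {
        uint32_t dir = (va >> 22) & 0x3FF;          /* bits 31..22: page directory index */
        uint32_t tab = (va >> 12) & 0x3FF;          /* bits 21..12: page table index     */
        uint32_t off = va & 0xFFF;                  /* bits 11..0 : page offset          */

        if (!as->pde[dir].present)                  /* whole second-level table absent   */
            return false;
        if (!as->tables[dir][tab].present)          /* the page itself absent            */
            return false;

        *pa = (as->tables[dir][tab].frame << 12) | off;
        return true;
    }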

  16. Inverted Hash Table • The page table is only as large as the number of physical pages in main memory. • (Figure) The virtual page number is hashed into a hash table; the selected entry heads a chain of page table entries that is searched for the matching virtual page number, and the resulting physical page number is combined with the page offset to form the physical address.
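
A minimal C sketch of such a lookup, assuming a hash table of chain heads and one entry per physical frame (the hash function, sizes, and field names are illustrative, and the buckets are assumed initialized to -1 for "empty"):

    #include <stdint.h>
    #include <stddef.h>

    #define NFRAMES 4096                            /* one entry per physical page frame */

    typedef struct {
        int      pid;                               /* owning process                    */
        uint32_t vpn;                               /* virtual page held by this frame   */
        int      next;                              /* next frame in hash chain, -1 = end */
    } iframe_t;

    static iframe_t frames[NFRAMES];
    static int      bucket[NFRAMES];                /* hash bucket -> first frame index  */

    static size_t hash_vpn(int pid, uint32_t vpn)
    {
        return ((uint32_t)pid * 31u + vpn) % NFRAMES;   /* toy hash function            */
    }

    /* Return the physical frame holding (pid, vpn), or -1 on a miss (page fault). */
    int inverted_lookup(int pid, uint32_t vpn)
    {
        for (int f = bucket[hash_vpn(pid, vpn)]; f != -1; f = frames[f].next)
            if (frames[f].pid == pid && frames[f].vpn == vpn)
                return f;
        return -1;
    }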

  17. Writing to the Pages • Virtual memory systems use a write-back scheme to write updated pages to the disk. • A dirty bit is added to the page table to keep track of whether a page has been written since it was read into memory. • When the OS chooses to replace a page, it checks the dirty bit to decide whether it is necessary to write that page back to the disk.

  18. Translation-Lookaside Buffer (TLB) • Every memory access results in at least two accesses: • one access to the page table to obtain the physical address • a second access to get the data. • To improve on this, take advantage of locality of reference to the page table: • references to the words on a page have both temporal and spatial locality, so the same translation is reused. • Modern machines include a special cache, the Translation-Lookaside Buffer (TLB), that keeps track of recently used translations. • A TLB is a cache that holds only page table mappings (see the sketch after this slide).
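
A minimal sketch of a small, fully associative TLB in C (the size and field names are illustrative): each entry caches one virtual-to-physical page mapping, so a hit avoids the extra page table access.

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64                          /* small, fully associative (hypothetical size) */

    typedef struct {
        bool     valid;
        uint32_t vpn;                               /* tag : virtual page number  */
        uint32_t ppn;                               /* data: physical page number */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Search every entry (fully associative); a hit returns the cached translation
     * without touching the page table in memory.                                   */
    bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *ppn = tlb[i].ppn;
                return true;                        /* TLB hit                      */
            }
        }
        return false;                               /* TLB miss: walk the page table */
    }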

  19. Translation-Lookaside Buffer (TLB) • (Figure) The TLB holds a valid bit, a tag (virtual page number), and a physical page number for a few recently used translations; the full page table in memory maps every virtual page either to a physical page in memory (valid bit 1) or to its address in disk storage (valid bit 0).

  20. TLB • The tag holds a portion of the virtual page number. • Each data entry in the TLB holds a physical page number. • TLB entries include additional information such as the reference bit and the dirty bit. • When a miss occurs, we need to determine whether it is a page fault or merely a TLB miss. • If it is only a TLB miss, the CPU loads the translation from the page table into the TLB and tries the reference again. • TLB misses are much more frequent than page faults, which are handled through exceptions.
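
A hedged sketch of that decision in C, building on the pte_t page table and tlb[] array from the earlier sketches (the random refill policy matches the note on the next slide):

    #include <stdlib.h>
    /* assumes pte_t, translate-style page table, tlb[] and TLB_ENTRIES from the
     * earlier sketches                                                            */

    /* On a miss: if the page table entry is valid, this was only a TLB miss, so
     * refill a (randomly chosen) TLB slot and retry; otherwise it is a page fault
     * that the OS must handle by bringing the page in from disk.                  */
    bool handle_tlb_miss(const pte_t *page_table, uint32_t vpn)
    {
        if (!page_table[vpn].valid)
            return false;                           /* page fault: OS takes over    */

        int slot = rand() % TLB_ENTRIES;            /* random TLB replacement       */
        tlb[slot].valid = true;
        tlb[slot].vpn   = vpn;
        tlb[slot].ppn   = page_table[vpn].ppn;
        return true;                                /* TLB refilled: retry the access */
    }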

  21. TLB • When replacing an entry in the TLB, only the reference and dirty bits of that entry are copied back to the page table. • Typical values for a TLB: • 16–512 entries • Block size: 1–2 page table entries • Hit time: 0.5–1 clock cycle • Miss penalty: 10–100 clock cycles • Miss rate: 0.01%–1% • Fully associative mapping in a TLB is not too costly when the TLB is small. • Random methods for replacing TLB entries provide reasonable performance.

  22. Intrinsity FastMATH TLB • (Figure) The 32-bit virtual address splits into a 20-bit virtual page number and a 12-bit page offset. The 16-entry, fully associative TLB compares the virtual page number against every tag in parallel; on a hit, the physical page number is concatenated with the page offset to form the physical address, which is then split into a cache tag, cache index, and block offset to access the physically addressed cache.

  23. Reads & Writes: TLB Access • (Flowchart) The virtual address first goes to the TLB. On a TLB miss, a TLB miss exception is raised; on a TLB hit, the physical address is formed. • For a read: try to read the data from the cache; on a cache hit, deliver the data to the CPU; on a cache miss, stall. • For a write: if the write access bit is off, raise a write protection exception; otherwise try to write the data to the cache; on a cache hit, write the data into the cache and update the dirty bit; on a cache miss, stall.

  24. Hits & Misses • Possible combinations of TLB, page table (virtual memory), and cache events, and whether each can occur: • TLB hit, page table hit, cache miss: possible, although the page table is never really checked if the TLB hits. • TLB miss, page table hit, cache hit: TLB misses, but the entry is found in the page table; after retry, the data is found in the cache. • TLB miss, page table hit, cache miss: TLB misses, but the entry is found in the page table; after retry, the data misses in the cache. • TLB miss, page table miss, cache miss: TLB misses and is followed by a page fault; after retry, the data must miss in the cache. • TLB hit, page table miss, cache miss: impossible; cannot have a translation in the TLB if the page is not present in memory. • TLB hit, page table miss, cache hit: impossible; cannot have a translation in the TLB if the page is not in memory. • TLB miss, page table miss, cache hit: impossible; data cannot be allowed in the cache if the page is not in memory.

  25. Indexing Cache in VM 1/3 • The cache is usually physically indexed. • Address translation is done first, then the cache is accessed. • No need to flush cache contents on a context switch. • No aliasing problems. • No need for a PID field.

  26. MIPS Memory Allocation • (Figure) Addresses below 0040 0000 are reserved; the text segment starts at 0040 0000 (pc); static data starts at 1000 0000, with gp pointing at 1000 8000; dynamic data grows upward above static data; the stack starts at 7fff fffc (sp) and grows downward.

  27. Indexing Cache in VM 2/3 • As an alternative, the cache can be indexed by virtual address and virtually tagged. • Address translation hardware (e.g. the TLB) is not used on a hit. • When a cache miss occurs, address translation is needed. • A PID field is needed. • Two virtual addresses for the same physical page (aliasing) may occur, i.e. a word on such a page can be cached in two different locations.

  28. Indexing Cache in VM 3/3 • As a compromise, the cache can be virtually indexed but physically tagged. • Address translation and cache indexing take place simultaneously: the cache index and block offset come from page-offset bits of the virtual address (bits 11–0 with 4 KB pages), which do not change under translation, while the TLB supplies the physical page number used as the tag. • (Figure) The page address (bits 31–12) goes to the TLB; the page offset supplies the cache index and block offset; a cache hit requires the TLB’s physical page number to match the stored tag.

  29. Protection in VM • Each process has its own (virtual) address space, and a process is not allowed to access other processes’ address spaces under normal circumstances. • The OS guarantees that independent virtual pages map to disjoint physical pages. • In order to implement this, a process must not be allowed to change the page table mapping. • The OS prevents a user process from modifying its own page tables. • The OS, however, must be able to do so. • Page tables are placed in the address space of the OS.

  30. Protection in VM • The hardware provides three basic capabilities: • Support at least two modes: • user process • OS process (kernel, supervisor, or executive process). • Keep a portion of the CPU state inaccessible to user processes: • user/supervisor bit, page table pointer, TLB. • Provide a mechanism whereby the CPU can go from user mode to supervisor mode, and vice versa: • system call exception, implemented as a special instruction (e.g. syscall in MIPS) • return-from-exception (ERET) instruction to restore the state of the process that generated the exception.

  31. Data Sharing & Context Switch • When two processes share information in a limited way, the OS must assist them. • A write access bit is used to restrict the sharing to just reading. • Access bits must be included both in the TLB and in the page table. • Context (process) switching from P1 to P2: • It suffices to change the page table register to point to P2’s page table. • But the TLB must be reloaded with entries for P2. • Alternatively, a process (task) identifier (PID) is added to the virtual address, so the TLB need not be flushed: • an 8-bit ID in the Intrinsity FastMATH. • A PID is sometimes included in the cache for similar reasons.

  32. Handling TLB Miss • A new TLB entry is created by retrieving the physical address from the page table. • Can be handled either in software or in hardware. • MIPS handles it in software (via an exception). • Takes about 13 clock cycles.

  TLBmiss:
    mfc0 $k1, Context    # copy address of PTE into $k1
    lw   $k1, 0($k1)     # put PTE into $k1
    mtc0 $k1, EntryLo    # put PTE into EntryLo
    tlbwr                # put EntryLo into TLB at a random slot
    eret                 # return from TLB miss exception

  33. Page Fault • The page is not in memory: • detected when the valid bit in the page table entry is off. • Handled in software. • The page fault must be recognized in the same clock cycle as the memory access; • otherwise, an lw would destroy the content of its destination register before the exception is taken. • In the next clock cycle, the exception handling must start. • On an exception due to a page fault, the OS saves the entire state of the active process: • all the general-purpose integer and floating-point registers, the page table register, the EPC, and the exception Cause register.

  34. Exceptions Due to Page Faults • Three steps are involved: • Find the location of the referenced page on disk. • Choose a physical page to replace: • if the chosen page is dirty, write it back to the disk. • Bring the referenced page from disk into memory. • The second and last steps may take millions of cycles, so the OS switches context to another process while waiting. • MIPS instructions are restartable. • But in some processors, instructions are not easily restartable (e.g. block move instructions).
