
Course Overview: Principles of Operating Systems






Presentation Transcript


  1. Course Overview: Principles of Operating Systems • Introduction • Computer System Structures • Operating System Structures • Processes • Process Synchronization • Deadlocks • CPU Scheduling • Memory Management • Virtual Memory • File Management • Security • Networking • Distributed Systems • Case Studies • Conclusions

  2. Chapter Overview: Virtual Memory • Motivation • Objectives • Background • System Requirements • Virtual Memory • Page Replacement Algorithms • FIFO • Least Recently Used • Clock Algorithms • Frame Allocation • Thrashing • Working Set Model • Page Fault Frequency • Implementation Issues • Important Concepts and Terms • Chapter Summary

  3. Motivation • not all parts of a process image are needed • at all times during execution • every time the program is run • unnecessary parts can be kept on secondary storage (hard disk) • they need to be brought into main memory only when needed • improves the utilization of main memory • main memory holds fewer infrequently used sections • more processes can be accommodated • must be managed carefully to avoid a substantial decrease in performance

  4. Objectives • realize the limitations of memory management without virtual memory • understand the basic techniques of virtual memory • fetch methods for requested pages • page replacement methods • allocation of frames to processes • be aware of the tradeoffs and limitations • locality of reference • predicting future page references • evaluate the performance impact of virtual memory on the overall system • effective access time

  5. Background • limitations of systems without virtual memory • memory is not fully utilized • sometimes overlays are used by programmers to time-share parts of main memory • lower degree of multiprogramming • there is not enough physical memory to accommodate all processes • program size • the size of programs (process images) is limited by the available physical memory • level of abstraction • the programmer must be aware of hardware details like the size of physical memory

  6. Processes in Main Memory • it is not really necessary to keep the complete process image always in main memory • currently used code • currently used (user) data structures • system data (heap, stack) • some parts of the process image may be kept on secondary storage • must be swapped in when needed

  7. Virtual Memory • principle • hardware requirements • software components • data structures • advantages and problems

  8. Virtual Memory Principle • a technique that allows the execution of processes that are not completely in main memory • separates logical memory (as viewed by the process) from physical memory

  9. System Diagram • [figure: CPU (control unit, registers, arithmetic logic unit) connected to main memory and the hard disk via the system bus; after David Jones]

  10. Virtual Memory Diagram • [figure: a process image (program, data, user stack, shared address space) whose page table, reached from the process control block, maps each page to a frame in main memory; pages that are not resident are kept on the hard disk]

  11. Hardware Requirements • virtual memory usually is supported by a memory management unit (MMU) • minimum requirements • address conversion support • as in paging or segmentation • additional entry in page tables • present bit (valid/invalid bit)

  12. Software Components • paging or segmentation • to keep track of the parts of processes and their locations • swapper • loads and unloads parts of processes between main memory and hard disk • fetch algorithm • determines which parts of the process should be brought into main memory • demand paging is the most frequently used one • page replacement algorithm • determines which parts are swapped out when memory needs to be freed up

  13. Data Structures • pages or segments • parts of the process image handled by virtual memory • page frames • sections of physical memory that hold pages • page tables • keep track of the allocation of pages to frames • free page frame list • list of all frames currently not in use • replacement algorithm tables • contain data about pages and frames • used to decide which pages to replace • swap area • secondary memory area for pages not currently in main memory • a complete image of every process is kept here

  14. Advantages and Problems • advantages • large virtual address space • processes can be larger than physical memory • better memory utilization • only the parts of a process actually used are kept in main memory • less I/O for loading or swapping the process • only parts of the process need to be transferred • problems • complex implementation • additional hardware, components, data structures • performance loss • overhead due to VM management • delay for transferring pages

  15. Page Faults • an interrupt/trap is generated if a memory reference leads to a page that is currently not in main memory • may be caused by any memory access (instructions, user data, system data) • additional information must be kept in the page table to indicate whether a page is currently in main memory • present bit • valid/invalid bit may be used • the page needs to be loaded before execution can continue

  16. Page Fault Handling • a page fault trap is generated if the present bit is not set • a page frame is allocated • if there are no free frames left, the page replacement algorithm must be invoked • a page is read into the page frame • the page table is updated • the instruction is restarted
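The steps above can be sketched in Python-style pseudocode. This is an illustrative sketch only; all names (PageTableEntry, page_table, free_frames, choose_victim, read_page_from_disk, write_page_to_disk) are hypothetical stand-ins for kernel data structures and routines, not a real API.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame: int = -1         # page frame number in main memory
    present: bool = False   # present (valid/invalid) bit

page_table = {}             # page number -> PageTableEntry (hypothetical)
free_frames = []            # free page frame list (hypothetical)

def read_page_from_disk(page, frame):
    pass                    # stand-in for the disk transfer (about 20 ms)

def write_page_to_disk(page, frame):
    pass                    # stand-in for writing back a modified victim page

def choose_victim():
    """Placeholder replacement policy; FIFO, LRU, etc. are discussed below."""
    victim = next(p for p, e in page_table.items() if e.present)
    return victim, page_table[victim].frame

def handle_page_fault(page):
    """Invoked by the trap raised when the present bit is not set."""
    if free_frames:                          # use a free frame if one exists
        frame = free_frames.pop()
    else:                                    # otherwise run page replacement
        victim, frame = choose_victim()
        write_page_to_disk(victim, frame)    # only needed if the victim is dirty
        page_table[victim].present = False
    read_page_from_disk(page, frame)         # read the faulting page into the frame
    page_table[page] = PageTableEntry(frame=frame, present=True)
    # the interrupted instruction is then restarted by the hardware
```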

  17. Page Fault Times • assumptions: 100 MHz CPU clock (10 ns per cycle), 20 ms average access and transfer time per page

  18. Page Fault Rate • an instruction that takes normally tens of nanoseconds will take tens of milliseconds if a page fault occurs • a factor of 100,000 longer • this is an intolerable slowdown • the frequency of page faults is very important for the performance of the system • the page fault rate must be kept very low

  19. Page Faults and EAT • effective access time (p = page fault rate) • EAT = p * (access time with page fault) + (1 - p) * (access time without page fault)

  20. Page Faults and Performance • what is the performance loss for a page fault rate of 1 in 100,000? • access time without page fault: 100 ns • memory access time including page table and TLB effects • access time with page fault: 0.5 * 20 ms + 0.5 * 40 ms = 30 ms • page replacement (writing out a victim page) is needed for 50% of page faults • EAT = 0.00001 * 30 ms + (1 - 0.00001) * 100 ns = 300 ns + 99.999 ns ≈ 400 ns • this is a 300% performance loss

  21. Page Faults and Performance • what page fault rate is needed for at most a 10% performance loss? • same conditions as in the previous example • a 10% performance loss corresponds to a 110 ns EAT • 110 ns ≥ p * 30 ms + (1 - p) * 100 ns ≈ p * 30,000,000 ns + 100 ns • 10 ns ≥ p * 30,000,000 ns • p ≤ 10/30,000,000 ≈ 1/3,000,000 = 1 in 3 million • only about one out of 3 million memory accesses may cause a page fault
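Both calculations can be checked with a few lines of Python (an illustrative sketch using only the numbers from the two examples above):

```python
# all times in nanoseconds
ACCESS = 100                                 # access time without page fault
FAULT = 0.5 * 20_000_000 + 0.5 * 40_000_000  # 30 ms: half the faults also write a page out

def eat(p):
    """Effective access time for page fault rate p."""
    return p * FAULT + (1 - p) * ACCESS

print(eat(1e-5))                # ~400 ns for a fault rate of 1 in 100,000

# largest p giving at most a 10% performance loss (EAT <= 110 ns)
p_max = (110 - ACCESS) / (FAULT - ACCESS)
print(p_max, round(1 / p_max))  # ~3.3e-7, i.e. about 1 fault per 3 million accesses
```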

  22. Locality of Reference • locality of reference indicates that the next memory access will be in the vicinity of the current one • spatial • successive memory accesses will be in the same neighborhood • temporal • the same memory location(s) will be accessed repeatedly • locality of reference is the main reason why the number of page faults can be low

  23. Reasons for Locality of Reference • instruction execution • the execution of a program proceeds largely in sequence • except for branch and call instructions • iterative constructs repeat a rather small number of instructions several times • data access • many data structures are accessed in sequence • list, array, tree • may depend on programming style and the code generated by the compiler

  24. Data Structures and Locality • good locality • stack, queue, array, record • medium locality • linked list, tree • bad locality • graph, pointer-based structures

  25. Fetch Policy • determines which pages will be brought into main memory from secondary storage • easy if it is known which pages will be used next • in practice this is not known • the most popular approach is demand paging • pages are fetched when needed • this is indicated by a page fault • an alternative is prepaging (anticipative paging) • an attempt is made to load pages before they are actually needed • is not always feasible

  26. Demand Paging • an attempt to access a page that is currently not in main memory generates a page fault • in response to the page fault, the respective page is loaded into main memory • requires an available frame

  27. Prepaging • anticipative paging • pages that are likely to be used in the near future are loaded before they are needed • this saves the waiting time for the page to be brought in • especially with DMA it can be performed concurrently with program execution • can depend on secondary storage policies • contiguously stored pages are advantageous because of lower seek times

  28. Placement Policy • tries to identify a “good” location for a program part to be brought into main memory • not a problem with paging • each page fits in every frame • for (pure) segmentation it is essentially a variation of the general memory allocation problem • find a fitting hole for the segment to be brought in

  29. Replacement Policy • identifies a frame to be freed when a page fault occurs, but no free frame is available • the page occupying that frame must be swapped out to secondary storage • can be based on a number of algorithms • FIFO, least recently used, clock-based, etc.

  30. Replacement Algorithm Objective • ideally, the page being replaced should be the page least likely to be referenced in the near future • in practice, this is not possible because future page references are unknown • many algorithms try to predict the future by looking into the past • assuming that the past behavior of a process is similar to its future behavior • locality of reference is what makes past references a useful predictor of future ones

  31. Page Replacement Algorithms • first-in, first-out (FIFO) algorithm • optimal algorithm • least recently used (LRU) algorithm • LRU approximation algorithms

  32. Evaluation of Algorithms • keep the number of page faults as low as possible • the performance of page replacement algorithms is often compared on the basis of a reference string • the reference string indicates the sequence in which pages are used • it is derived from a trace of the addresses accessed in a system • successive references to addresses on the same page are reduced to a single entry in the reference string • the number of page faults also depends on the number of available frames • with more frames, the number of page faults decreases until all pages can be accommodated
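For illustration, a reference string can be derived from an address trace as described above. The following sketch assumes a 4 KB page size, which the slides do not fix:

```python
def reference_string(addresses, page_size=4096):
    """Map an address trace to page numbers, keeping a single entry
    for successive references that fall on the same page."""
    refs = []
    for addr in addresses:
        page = addr // page_size
        if not refs or refs[-1] != page:
            refs.append(page)
    return refs

trace = [0x1000, 0x1004, 0x1008, 0x2F00, 0x2F04, 0x1010]
print(reference_string(trace))   # [1, 2, 1]
```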

  33. First-In, First-Out • the page that was brought in first will be replaced first • must keep track of the time when a page is loaded • this can also be done in a FIFO queue • very simple, but not very good • the oldest page may have been used very recently • there is a good chance that the replaced page will be needed again shortly

  34. FIFO Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [table of frame contents after each reference omitted] • Page Faults: 12

  35. FIFO Example • seven pages • five frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [table of frame contents after each reference omitted] • Page Faults: 10
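Both results can be reproduced with a short simulation (an illustrative sketch, not part of the original slides):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # page loaded first sits at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue                 # hit: FIFO order is not changed
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()         # evict the page that was brought in first
        frames.append(page)
    return faults

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
print(fifo_faults(refs, 4))   # 12 page faults (slide 34)
print(fifo_faults(refs, 5))   # 10 page faults (slide 35)
```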

  36. Optimal • the page that will not be used for the longest period is replaced • looks forward in the reference string (into the future) • provably optimal algorithm • impractical for real systems • can’t be implemented since it is not known when a page will be used again • used as a benchmark

  37. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 3, 1, 5 • Selected: 1

  38. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 3, 2, 5 • Selected: 3

  39. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 6, 2, 5 • Selected: 6

  40. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 7, 2, 5 • Selected: 2 or 5 (neither page will be used again)

  41. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 7, 6, 5 • Selected: 6 or 5 (neither page will be used again)

  42. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 7, 6, 1 • Selected: 6 or 1 (neither page will be used again)

  43. Optimal Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Page Faults: 10
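The 10 page faults can be reproduced with a simulation of the optimal policy (a sketch; as on the slides, ties between pages that are never referenced again may be broken either way):

```python
def opt_faults(refs, num_frames):
    """Count page faults under the optimal policy (replace the page whose
    next use lies farthest in the future)."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)      # a free frame is still available
            continue
        future = refs[i + 1:]
        def next_use(p):
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
print(opt_faults(refs, 4))    # 10 page faults (slide 43)
```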

  44. Least Recently Used (LRU) • the page that has not been used for the longest period is replaced • looks backward in the reference string (into the past) • similar to the optimal algorithm, but practical • requires additional information about the pages • last usage (time stamp, ordering of the pages)

  45. LRU Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 3, 1, 5 • Selected: 4

  46. LRU Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 2, 3, 1, 5 • Selected: 5

  47. LRU Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 2, 3, 1, 6 • Selected: 1

  48. LRU Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 2, 3, 7, 6 • Selected: 2

  49. LRU Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 3, 7, 6 • Selected: 3

  50. LRU Example • seven pages • four frames • Reference String: 4 3 1 5 1 2 3 6 7 4 2 5 6 1 3 4 7 • [frame-contents diagram omitted] • Replacement Candidates: 4, 2, 7, 6 • Selected: 6
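The eviction choices on slides 45 to 50 can be reproduced with a small LRU simulation (an illustrative sketch, not part of the original slides); it evicts 4, 5, 1, 2, 3 and 6 in that order for these first six replacements:

```python
def lru_simulate(refs, num_frames):
    """Simulate LRU replacement; return the fault count and eviction order."""
    frames = []                      # least recently used page at index 0
    faults, evicted = 0, []
    for page in refs:
        if page in frames:
            frames.remove(page)      # hit: move the page to the most-recent end
            frames.append(page)
            continue
        faults += 1
        if len(frames) == num_frames:
            evicted.append(frames.pop(0))   # evict the least recently used page
        frames.append(page)
    return faults, evicted

refs = [4, 3, 1, 5, 1, 2, 3, 6, 7, 4, 2, 5, 6, 1, 3, 4, 7]
faults, evicted = lru_simulate(refs, 4)
print(evicted[:6])   # [4, 5, 1, 2, 3, 6]: the pages selected on slides 45-50
print(faults)        # total page faults for the full reference string
```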
