
Outline


Presentation Transcript


  1. Outline • Basic memory management • Swapping • Virtual memory • Page replacement algorithms • Modeling page replacement algorithms • Design issues for paging systems • Implementation issues • Segmentation

  2. When Is the OS Involved in Paging? • Process creation • Process execution • Page faults • Process termination

  3. Paging in Process Creation • Determine the (initial) size of the program and data • Assign appropriate page frames • Create the page table • Process is running → its page table must be in memory • Create a swap area • For the pages that get swapped out • Record information about the page table and the swap area in the process table (see the sketch below)
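
  As a concrete illustration, here is a minimal C sketch of the bookkeeping described above; the type and helper names (page_table_entry, proc_entry, alloc_swap_area) are hypothetical, not taken from the slides.

    #include <stdint.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096

    /* One page-table entry: page frame number plus a present bit. */
    typedef struct {
        uint32_t frame   : 20;
        uint32_t present :  1;
    } page_table_entry;

    /* Per-process paging record kept in the process table. */
    typedef struct {
        page_table_entry *page_table;   /* created at process creation           */
        long              swap_start;   /* disk offset of the reserved swap area */
        size_t            num_pages;
    } proc_entry;

    long alloc_swap_area(size_t num_pages);   /* hypothetical: reserve disk space */

    proc_entry *create_process_paging(size_t image_bytes)
    {
        proc_entry *p = malloc(sizeof *p);
        p->num_pages  = (image_bytes + PAGE_SIZE - 1) / PAGE_SIZE;   /* program + data size     */
        p->page_table = calloc(p->num_pages, sizeof *p->page_table); /* create the page table   */
        p->swap_start = alloc_swap_area(p->num_pages);               /* for pages swapped out   */
        return p;
    }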

  4. Paging During Process Execution • When a process is scheduled for execution • Reset the MMU • Flush the TLB (Translation Lookaside Buffer) • Copy or point to the new process's page table • Bring some or all of the new process's pages into memory • Reduces the number of page faults

  5. Handling a Page Fault • Read hardware registers to identify the virtual address that caused the fault • Locate the needed page on disk • Find an available page frame • Evict an old page if necessary • Read in the page • Restart the faulting instruction (the whole path is sketched below)
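
  The same steps as a hedged C sketch, reusing proc_entry and PAGE_SIZE from the process-creation sketch above; find_free_frame, evict_page, and read_page_from_disk are hypothetical helpers.

    int  find_free_frame(void);                                          /* hypothetical */
    int  evict_page(void);                                               /* hypothetical */
    void read_page_from_disk(long swap_start, uint32_t vpn, int frame);  /* hypothetical */

    void handle_page_fault(proc_entry *p, uint32_t fault_vaddr)
    {
        uint32_t vpn = fault_vaddr / PAGE_SIZE;   /* which virtual page faulted     */

        int frame = find_free_frame();            /* find an available page frame   */
        if (frame < 0)
            frame = evict_page();                 /* evict an old page if necessary */

        read_page_from_disk(p->swap_start, vpn, frame);   /* locate the page on disk, read it in */

        p->page_table[vpn].frame   = frame;
        p->page_table[vpn].present = 1;
        /* On return, the hardware restarts the faulting instruction. */
    }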

  6. Paging at Process Exit • Release the page table • Release the pages and the swap area on disk • If some pages are shared with other processes, keep them

  7. Swap Area • Swap area: disk space reserved for a process's pages • A chunk of disk space holding as many pages as the process has • Each process has its own swap area, recorded in the process table • Initialize the swap area before the process runs • Copy the entire process image to it, or • Load the process in memory and let it be paged out as needed

  8. Swap Area for Growing Processes • A process may grow after it starts • The data area and the stack may grow • Option 1: reserve separate swap areas for text (program), data, and stack • Option 2: reserve nothing in advance • Allocate disk space when a page is swapped out • De-allocate it when the page is swapped in again • Must keep track of every page's location on disk • A disk address per page, which is costly (see the sketch below)
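
  A small C sketch of what the two options cost per page, reusing PAGE_SIZE from the earlier sketch; the names are illustrative.

    /* Option 1 (static swap area): the disk address of virtual page vpn is implicit. */
    long static_disk_addr(long swap_start, uint32_t vpn)
    {
        return swap_start + (long)vpn * PAGE_SIZE;
    }

    /* Option 2 (dynamic backing): every page-table entry must carry its own
     * disk address, which is the extra per-page cost mentioned above. */
    typedef struct {
        uint32_t frame     : 20;
        uint32_t present   :  1;
        long     disk_addr;        /* where this page lives on disk, if swapped out */
    } pte_with_disk_addr;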

  9. Comparison of the Two Methods • Figure: (a) paging to a static swap area; (b) backing up pages dynamically

  10. Outline • Basic memory management • Swapping • Virtual memory • Page replacement algorithms • Modeling page replacement algorithms • Design issues for paging systems • Implementation issues • Segmentation

  11. A Motivating Example • Many tables are built while compiling a program • Symbol table: names and attributes of variables • Constant table: integer and floating-point constants • Parse tree: result of syntactic analysis • Stack: for procedure calls within the compiler • Each table needs a contiguous chunk of virtual address space, but the tables grow and shrink as compilation proceeds • How to manage space for these tables?

  12. With a One-Dimensional Address Space • Take space away from tables that have an excess of room • Tedious bookkeeping • Better: free programmers from managing expanding and contracting tables • Segments: many completely independent address spaces

  13. Segments • Segmentation provides a two-dimensional memory • Each segment is a linear sequence of addresses (from 0 to some maximum) • Different segments may have different lengths • Segment lengths may change during execution • Different segments can grow and shrink independently • An address is a segment number plus an address within the segment (the offset), as in the sketch below
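
  A minimal C sketch of translating such a two-part address under pure segmentation; segment_descriptor and the field names are assumptions, not from the slides.

    #include <stdint.h>

    typedef struct {
        uint32_t base;    /* where the segment starts in physical memory */
        uint32_t limit;   /* current length of the segment, in bytes     */
    } segment_descriptor;

    /* Translate (segment #, offset) to a physical address; returns -1 on a violation. */
    int translate_segment(const segment_descriptor *seg_table, int nsegs,
                          int seg, uint32_t offset, uint32_t *phys)
    {
        if (seg >= nsegs || offset >= seg_table[seg].limit)
            return -1;                            /* bad segment, or offset past the end */
        *phys = seg_table[seg].base + offset;
        return 0;
    }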

  14. Multiple Segments in a Process • Segments are logical entities • Programmers are aware of them • A segment may contain a procedure or an array, but not a mixture of different types • Facilitates separate protection for each segment

  15. Paging vs. Segmentation

  16. Implementing Pure Segmentation • External fragmentation develops as segments are created and removed over time

  17. Segmentation With Paging: MULTICS • For large segments, only the “working set” needs to be kept in memory • Paged segments: each segment has its own page table • Each program has a segment table • One entry (descriptor) per segment • The segment table is itself a segment • If (part of) a segment is in memory, its page table must be in memory • Address: segment # + virtual page # + offset • Segment # → page table • Page table + virtual page # → page frame address • Page frame address + offset → physical address (sketched below)
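
  A hedged C sketch of that translation chain, reusing page_table_entry and PAGE_SIZE from the earlier paging sketch; the descriptor layout is illustrative, not the real MULTICS format.

    typedef struct {
        page_table_entry *page_table;   /* descriptor holds the address of the page table */
        uint32_t          length;       /* segment length, in pages                       */
    } multics_seg_descriptor;

    uint32_t translate_multics(const multics_seg_descriptor *seg_table,
                               uint32_t seg, uint32_t vpn, uint32_t offset)
    {
        const multics_seg_descriptor *sd = &seg_table[seg];   /* segment # → descriptor/page table */
        page_table_entry pte = sd->page_table[vpn];           /* page table + virtual page # → PTE */
        return pte.frame * PAGE_SIZE + offset;                /* page frame address + offset       */
    }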

  18. The MULTICS Virtual Memory • Figure: a MULTICS virtual address selects a descriptor in the segment table; the segment descriptor contains the memory address of that segment's page table (e.g., for segment 0 or segment 2), and the page table entry gives the page frame.

  19. Summary • Fixed partitions • Multiple queues vs. a single queue • Degree of multiprogramming • Relocation and protection • Swapping • Virtual memory vs. physical memory • Bitmaps and linked lists • Holes

  20. Summary (Cont.) • Virtual memory • Pages vs. page frames • Page tables • Page replacement algorithms (aging and WSClock) • Modeling paging systems • Stack algorithms • Predicting page faults using the distance string

  21. Summary (Cont.) • Design issues • Local vs. global allocation • Load control to reduce thrashing • Shared pages • Implementation issues • Page fault handling and the swap area • Segmentation • Each segment has its own address space • Advantages • Pure segmentation and segmentation with paging

  22. CMPT 300: Operating System, Chapter 5: Input/Output

  23. Outline • Principles of I/O hardware • Principles of I/O software • I/O software layers • Disks • Clocks

  24. Block and Character Devices • Block devices: store information in fixed-size blocks • Example: disks • Each block has its own address • Each block can be read/written independently • Data are read/written in units of blocks • Character devices: no block structure, deliver/accept a stream of characters • Examples: keyboards, printers, network interfaces, mice • Not addressable, no seek operation • Some devices do not fit either class, e.g., clocks

  25. Huge Range in Speeds • Data rates span many orders of magnitude • Keyboard: 10 bytes/sec • Mouse: 100 bytes/sec • Laser printer: 100 KB/sec • IDE disk: 5 MB/sec • PCI bus: 528 MB/sec • Challenge: how to design a general structure that can control such varied I/O devices? • Multi-bus systems

  26. Structure of I/O Units • A mechanical component: the device itself • Disk: platters, heads, motors, arm, etc. • Monitor: tube, screen, etc. • An electronic component: the device controller (adapter) • Disk: issues commands to the mechanical components; assembles, checks, and transfers data • Monitor: reads the characters to be displayed and generates the electrical signals that modulate the CRT beam

  27. Mechanical / Electronic Components • Figure: the CPU, memory, and the electronic components (floppy disk, hard disk, video, and keyboard controllers) sit on a common bus; the mechanical components are the devices themselves (floppy disk, hard disk, monitor, keyboard).

  28. Device Controllers • Registers in I/O controllers • The CPU writes commands into the registers • The CPU reads device status from the registers • A data buffer for transferring data • How does the CPU distinguish the different registers and data buffers? • I/O port numbers • Memory-mapped I/O

  29. I/O Port Numbers • Each control register is assigned a unique I/O port number (an 8- or 16-bit integer) • Instructions IN and OUT • IN REG, PORT • OUT PORT, REG • Separate address spaces for memory and I/O ports • IN R0, 4 and MOV R0, 4 are completely different instructions

  30. Memory-Mapped I/O • Map all control registers into the memory space • Usually at the top of the address space • Each register is assigned a unique memory address, e.g., 0xFF10 • Hybrid scheme (used in the Pentium) • Figure: address-space layouts for separate I/O ports, memory-mapped I/O, and the hybrid scheme (a register access is sketched below)
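
  With memory-mapped I/O, a control register is read and written with ordinary loads and stores through a volatile pointer; a minimal sketch, assuming the slide's example address 0xFF10 for the status register and a hypothetical data register and READY bit.

    #include <stdint.h>

    #define PRINTER_STATUS ((volatile uint8_t *)0xFF10)  /* example address from the slide */
    #define PRINTER_DATA   ((volatile uint8_t *)0xFF11)  /* hypothetical data register     */
    #define READY 0x01                                   /* hypothetical status bit        */

    void putchar_mmio(uint8_t c)
    {
        while ((*PRINTER_STATUS & READY) == 0)   /* ordinary load, no IN instruction needed */
            ;                                    /* busy-wait until the device is ready     */
        *PRINTER_DATA = c;                       /* ordinary store writes the data register */
    }

  With separate I/O ports, the same accesses would need special IN/OUT instructions instead of plain loads and stores.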

  31. Pros & Cons of Memory-Mapped I/O • Advantages • Easier programming • Easier to protect and share I/O devices • Saves time when accessing control registers • Disadvantages • Caching a control register would be disastrous • Caching must be disabled for the affected pages • With separate buses for memory and I/O, the devices cannot see the memory addresses (see next slide)

  32. Single-Bus vs. Dual-Bus Architecture • Figure: with a single bus, all addresses (memory and I/O) go over the same bus; with a dual bus, CPU reads/writes of memory go over a separate high-bandwidth bus that the I/O devices do not see.

  33. Interrupts • Figure: (1) a device (disk, keyboard, printer, clock) finishes its work; (2) the interrupt controller issues an interrupt to the CPU over the bus; (3) the CPU acknowledges the interrupt.

  34. Interrupt Processing • An I/O device raises an interrupt by asserting a signal on its assigned bus line • Multiple pending interrupts → the one with the highest priority goes first • The interrupt controller interrupts the CPU • Puts the device # on the address lines • Device # → index into the interrupt vector table to find the interrupt handler (a program) • Interrupts are re-enabled shortly after the handler starts (dispatch sketched below)
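
  The vector-table lookup amounts to indexing an array of handler pointers with the device number; a minimal C sketch with hypothetical names.

    #define NUM_VECTORS 256

    typedef void (*interrupt_handler)(void);

    /* The interrupt vector table: one handler per device/vector number. */
    static interrupt_handler interrupt_vector[NUM_VECTORS];

    void save_registers(void);   /* hypothetical: save the interrupted CPU state */

    /* Called with the device # that the interrupt controller put on the address lines. */
    void dispatch_interrupt(unsigned device)
    {
        save_registers();
        interrupt_vector[device]();   /* device # indexes the table; the handler
                                         typically re-enables interrupts early on */
    }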

  35. Direct Memory Access (DMA) • Requesting data from I/O without DMA • The device controller reads data from the device • It interrupts the CPU when a byte/block of data is available • The CPU reads the controller's buffer into main memory • Too many interrupts: expensive • DMA: direct memory access • A DMA controller with registers read/written by the CPU • The CPU programs the DMA controller: what to transfer where • Source, destination, and size (see the sketch below) • The DMA controller interrupts the CPU only after all the data have been transferred
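
  "Programming the DMA" boils down to writing the source, destination, and count registers and then setting a start bit; a sketch assuming a hypothetical memory-mapped register layout.

    #include <stdint.h>

    /* Hypothetical register block of a DMA controller. */
    typedef struct {
        volatile uint32_t source;        /* where to read from (e.g., the device controller's buffer) */
        volatile uint32_t destination;   /* main-memory address to write to                           */
        volatile uint32_t count;         /* number of bytes to transfer                               */
        volatile uint32_t control;       /* bit 0: start; the controller interrupts when done         */
    } dma_regs;

    void start_dma(dma_regs *dma, uint32_t src, uint32_t dst, uint32_t nbytes)
    {
        dma->source      = src;
        dma->destination = dst;
        dma->count       = nbytes;
        dma->control     = 1;   /* go; the CPU is free until the completion interrupt */
    }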

  36. Operation of DMA • Figure: (1) the CPU programs the DMA controller (address, count, and control registers) and the disk controller; (2) the DMA controller requests a transfer from the disk controller's buffer to main memory; (3) the data are transferred over the bus; (4) the disk controller acknowledges, and the DMA controller interrupts the CPU when the whole transfer is done.

  37. Transfer Modes • Word-at-a-time (cycle stealing) • The DMA controller acquires the bus, transfers one word, and releases the bus • The CPU must wait for the bus if a transfer is in progress • Cycle stealing: the DMA controller steals an occasional bus cycle from the CPU • Burst mode • The DMA controller holds the bus until a series of transfers completes • More efficient, since acquiring the bus takes time • But it can block the CPU from using the bus for a substantial time

  38. Outline • Principles of I/O hardware • Principles of I/O software • I/O software layers • Disks • Clocks

  39. Issues for the I/O Software • Device independence • Uniform naming • Error handling • Buffering • Others

  40. How to Perform I/O? • Programmed I/O (polling / busy waiting) • Acquire the I/O device (e.g., the printer) • The caller blocks if the device is being used by another process • Copy the buffer from user space to kernel space • Step 1: keep checking the status register until the device is ready • Step 2: send one character to the printer; go back to Step 1 • Release the I/O device • The CPU does all the work • Simple • Wastes a lot of CPU time on busy waiting

  41. An Example • Q: Why does the OS copy the buffer from user space to kernel space? (The polling loop itself is sketched below.)
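
  A sketch of the polling loop behind this example, borrowing the register and helper names from the interrupt-driven fragment on the next slide; the loop itself is an assumption, not the original figure.

    /* Print system call, programmed I/O (busy waiting). */
    copy_from_user(buffer, p, count);          /* copy the user's buffer into kernel space   */
    for (i = 0; i < count; i++) {
        while (*printer_status_reg != READY)   /* Step 1: poll until the printer is ready    */
            ;
        *printer_data_register = p[i];         /* Step 2: send one character; back to Step 1 */
    }
    return_to_user();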

  42. Interrupt-Driven I/O
  • Print system call:
      copy_from_user(buffer, p, count);
      enable_interrupts();
      while (*printer_status_reg != READY);
      *printer_data_register = p[0];
      scheduler();
  • Interrupt service procedure:
      if (count == 0) {
          unblock_user();
      } else {
          *printer_data_register = p[i];
          count--;
          i++;
      }
      acknowledge_interrupt();
      return_from_interrupt();
  • An interrupt occurs on every character!

  43. I/O Using DMA
  • Too many interrupts in interrupt-driven I/O
  • DMA reduces the number of interrupts from one per character to one per buffer printed
  • Print system call:
      copy_from_user(buffer, p, count);
      set_up_DMA_controller();
      scheduler();
  • Interrupt service procedure:
      acknowledge_interrupt();
      unblock_user();
      return_from_interrupt();

  44. Outline • Principles of I/O hardware • Principles of I/O software • I/O software layers • Disks • Clocks

  45. Layers Overview

  46. Interrupt Handlers • Hide I/O interrupts deep inside the OS • The device driver starts the I/O and blocks (e.g., by doing a down on a mutex) • The interrupt wakes the driver up • Processing an interrupt • Save the registers (which ones, and where?) • Set up a context for the handler (TLB, MMU, page table) • Run the handler (it typically wakes up the blocked driver) • Choose a process to run next • Load the context of the newly selected process • Run that process • Takes a considerable number of CPU instructions

  47. Device Drivers • Device-specific code for controlling an I/O device • Written by the manufacturer and delivered along with the device • One driver per device (or per class of devices) • Position: part of the OS kernel, below the rest of the OS • Provides interfaces to the rest of the OS • Block devices and character devices have different interfaces

  48. Logical Position of Device Drivers • Figure: user programs run in user space; the rest of the OS and the device drivers (e.g., the printer driver) run in kernel space; below them sit the device controllers (e.g., the printer controller) and the devices themselves (e.g., the printer).

  49. How to Install a Driver? • Re-compile and re-link the kernel • The drivers and the OS form a single binary program • UNIX systems • Often run by computer centers, where devices rarely change • Dynamically load drivers during OS initialization • Windows systems • Devices change often • Hard to obtain the OS source code • Users don't know how to compile an OS

  50. Functions of Device Drivers • Accept abstract read/write requests (see the interface sketch below) • Error checking, parameter conversion • Check device status and initialize the device if necessary • Issue a sequence of commands • May block and wait for an interrupt • Check for errors, return data • Other issues: re-entrancy, up-calls, etc.
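
  The "abstract read/write requests" typically arrive through a small table of function pointers that each driver fills in; a hypothetical block-device interface, not a real kernel API.

    #include <stdint.h>

    /* Interface the rest of the OS uses to call a block-device driver. */
    typedef struct {
        int (*init)(void);                                       /* check status, initialize the device */
        int (*read_block)(uint32_t block_no, void *buf);         /* abstract read request               */
        int (*write_block)(uint32_t block_no, const void *buf);  /* abstract write request              */
    } block_driver_ops;

    /* A disk driver would supply one instance, e.g. (functions not shown):
     *   static const block_driver_ops disk_ops = { disk_init, disk_read, disk_write };
     * The layer above picks the right ops table and never touches the hardware directly. */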
