Operating System


Presentation Transcript


  1. Operating System • Exploits hardware resources • one or more processors • main memory, disk, and other I/O devices • Provides a set of services to system users • program development, program execution, access to I/O devices, controlled access to files and other resources, etc.

  2. Operating Systems: Internals and Design Principles, 6/E, William Stallings. Chapter 1: Computer System Overview

  3. Given Credits • Most of the lecture notes are based on the slides from the textbook’s companion website: http://williamstallings.com/OS/OS6e.html • Some of the slides are from Dr. David Tarnoff at East Tennessee State University • I have modified them and added new slides

  4. Computer Components: Top-Level View

  5. Processor Registers • User-visible registers • Enable programmer to minimize main memory references by optimizing register use • Control and status registers • Used by processor to control operation of the processor • Used by privileged OS routines to control the execution of programs

  6. Control and Status Registers • Program counter (PC) • Contains the address of an instruction to be fetched • Instruction register (IR) • Contains the instruction most recently fetched • Program status word (PSW) • Condition codes • Interrupt enable/disable • Kernel/user mode

  7. Control and Status Registers • Condition codes or flags • Bits set by processor hardware as a result of operations • Can be accessed by a program but not altered • Example • Condition code bit set following the execution of arithmetic instruction: positive, negative, zero, or overflow
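As a rough illustration of how condition codes get set, here is a small C sketch that models a few PSW flag bits after a signed add. The bit positions and names are invented for the example, not taken from any real processor.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical condition-code bits in a program status word (PSW). */
#define CC_ZERO     (1u << 0)
#define CC_NEGATIVE (1u << 1)
#define CC_OVERFLOW (1u << 2)

/* Set condition codes the way hardware would after a signed 32-bit add. */
static uint32_t add_and_set_cc(int32_t a, int32_t b, int32_t *result)
{
    uint32_t psw = 0;
    int64_t wide = (int64_t)a + (int64_t)b;     /* widen to detect overflow */
    *result = (int32_t)wide;

    if (*result == 0) psw |= CC_ZERO;
    if (*result < 0)  psw |= CC_NEGATIVE;
    if (wide > INT32_MAX || wide < INT32_MIN) psw |= CC_OVERFLOW;
    return psw;
}

int main(void)
{
    int32_t r;
    uint32_t psw = add_and_set_cc(INT32_MAX, 1, &r);
    printf("result=%d Z=%d N=%d V=%d\n", (int)r,
           !!(psw & CC_ZERO), !!(psw & CC_NEGATIVE), !!(psw & CC_OVERFLOW));
    return 0;
}
```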

  8. Instruction Execution • Two steps • Processor reads (fetches) instructions from memory • Processor executes each instruction

  9. Basic Instruction Cycle

  10. Instruction Fetch and Execute • The processor fetches the instruction from memory • Program counter (PC) holds address of the instruction to be fetched next • PC is incremented after each fetch

  11. Instruction Register • Fetched instruction loaded into instruction register • An instruction contains bits that specify the action the processor is to take • Categories of actions: • Processor-memory, processor-I/O, data processing, control
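The fetch-decode-execute cycle of slides 10-13 can be sketched in a few lines of C. The sketch below loosely follows the textbook's hypothetical machine (16-bit words, 4-bit opcode, 12-bit address field, one accumulator) and its example program (load, add, store); treating opcode 0 as "halt" is an assumption made only to stop the simulation.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t mem[4096] = {0};        /* 12-bit address space */
    uint16_t pc = 0x300, ir = 0;     /* program counter, instruction register */
    int16_t  ac = 0;                 /* accumulator */
    int halt = 0;

    /* Program: AC <- mem[940h]; AC <- AC + mem[941h]; mem[941h] <- AC */
    mem[0x300] = 0x1940; mem[0x301] = 0x5941; mem[0x302] = 0x2941;
    mem[0x303] = 0x0000;             /* opcode 0 halts this sketch */
    mem[0x940] = 3; mem[0x941] = 2;

    while (!halt) {
        ir = mem[pc++];              /* fetch; PC is incremented after each fetch */
        uint16_t opcode = ir >> 12;  /* decode: high 4 bits are the opcode */
        uint16_t addr   = ir & 0x0FFF;
        switch (opcode) {            /* execute */
        case 1: ac = (int16_t)mem[addr];   break;  /* load AC from memory  */
        case 2: mem[addr] = (uint16_t)ac;  break;  /* store AC to memory   */
        case 5: ac += (int16_t)mem[addr];  break;  /* add memory word to AC */
        default: halt = 1;                 break;
        }
    }
    printf("mem[0x941] = %u\n", (unsigned)mem[0x941]);   /* expect 5 */
    return 0;
}
```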

  12. Characteristics of a Hypothetical Machine

  13. Example of Program Execution

  14. Interrupts • Interrupt the normal sequencing of the processor • Why do we need interrupts?

  15. Classes of Interrupts

  16. Interrupts • Most I/O devices are slower than the processor • Without interrupts, processor has to pause to wait for device

  17. Program Flow of Control

  18. Program Flow of Control

  19. Interrupt Stage • Processor checks for interrupts • If interrupt • Suspend execution of program • Execute interrupt-handler routine
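A minimal way to picture the interrupt stage is a loop that runs one instruction and then checks for pending interrupts before fetching the next. The C sketch below simulates this; the "device" that raises an interrupt and all names are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

static bool interrupts_enabled = true;
static int  pending = 0;                 /* number of pending interrupts */

static void execute_next_instruction(int i)
{
    printf("user program: instruction %d\n", i);
    if (i == 3) pending++;               /* pretend a device interrupts here */
}

static void interrupt_handler(void)
{
    printf("  handler: service the device, then return\n");
}

int main(void)
{
    for (int i = 1; i <= 6; i++) {
        execute_next_instruction(i);     /* fetch + execute stages */

        /* Interrupt stage: checked after each instruction completes, so the
         * interrupted program can be resumed at a clean boundary. */
        if (interrupts_enabled && pending > 0) {
            /* hardware would save PC and PSW here; the handler restores them */
            interrupt_handler();
            pending--;
        }
    }
    return 0;
}
```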

  20. Transfer of Control via Interrupts

  21. Instruction Cycle with Interrupts

  22. Simple Interrupt Processing

  23. Changes in Memory and Registers for an Interrupt

  24. Changes in Memory and Registers for an Interrupt

  25. Multiple Interrupts • What to do if another interrupt happens when we are handling one interrupt?

  26. Sequential Interrupt Processing

  27. Nested Interrupt Processing
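To contrast the two policies of slides 26-27: in sequential processing, interrupts stay disabled while a handler runs, so a second interrupt is held pending; in nested processing, a higher-priority interrupt may preempt the running handler. The small C sketch below only encodes that decision; the priority levels are invented for the example.

```c
#include <stdio.h>

/* While a handler of priority `current` is running, decide what happens
 * when an interrupt of priority `incoming` arrives. */
static void arriving_interrupt(int incoming, int current, int nesting_allowed)
{
    if (nesting_allowed && incoming > current)
        printf("priority %d preempts the priority-%d handler (nested)\n",
               incoming, current);
    else
        printf("priority %d held pending until the priority-%d handler returns\n",
               incoming, current);
}

int main(void)
{
    arriving_interrupt(5, 2, 0);  /* sequential: everything waits */
    arriving_interrupt(5, 2, 1);  /* nested: higher priority runs at once */
    arriving_interrupt(1, 2, 1);  /* nested, but lower priority still waits */
    return 0;
}
```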

  28. Multiprogramming • Processor has more than one program to execute • The sequence in which the programs are executed depends on their relative priority and on whether they are waiting for I/O • After an interrupt handler completes, control may not return to the program that was executing at the time of the interrupt

  29. Input/Output Techniques • Programmed I/O • Interrupt-driven I/O • Direct Memory Access (DMA) • What are they, and how do they rank in efficiency?

  30. Input/Output Techniques • Programmed I/O – poll and response • Interrupt-driven – I/O module interrupts the CPU when it needs attention • Direct Memory Access (DMA) – module has direct access to a specified block of memory

  31. I/O Module Structure

  32. Programmed I/O – CPU has direct control over I/O • Processor requests operation with commands sent to I/O module • Control – telling a peripheral what to do • Test – used to check condition of I/O module or device • Read – obtains data from peripheral so processor can read it from the data bus • Write – sends data using the data bus to the peripheral • I/O module performs operation • When completed, I/O module updates its status registers • Sensing status – involves polling the I/O module's status registers

  33. Programmed I/O (continued) • I/O module does not inform CPU directly • CPU may wait, or do something else and come back later • Wastes CPU time because • CPU acts as a bridge for moving data between I/O module and main memory, i.e., every piece of data goes through the CPU • CPU waits for the I/O module to complete the operation
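A programmed-I/O read might look like the C sketch below: the CPU spins on a status register and copies every word itself. The register addresses and the READY bit layout are hypothetical, not from any particular device.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped device registers. */
#define DEV_STATUS (*(volatile uint32_t *)0x40000000u)
#define DEV_DATA   (*(volatile uint32_t *)0x40000004u)
#define STATUS_READY 0x1u

void programmed_read(uint32_t *buf, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++) {
        while ((DEV_STATUS & STATUS_READY) == 0)
            ;                       /* busy-wait: CPU time is wasted here */
        buf[i] = DEV_DATA;          /* every word passes through the CPU */
    }
}
```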

  34. Interrupt Driven I/O • Overcomes CPU waiting • Requires an interrupt service routine • No repeated CPU checking of the device • I/O module interrupts when ready • Still requires the CPU to act as a go-between, moving data between the I/O module and main memory

  35. Interrupt-Driven I/O • Consumes a lot of processor time because every word read or written passes through the processor
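An interrupt-driven version removes the polling loop but still routes each word through the CPU, as in this sketch. The ISR hook, register address, and buffer bookkeeping are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

#define DEV_DATA (*(volatile uint32_t *)0x40000004u)   /* hypothetical register */

static uint32_t     *rx_buf;
static size_t        rx_pos, rx_len;
static volatile int  rx_done;

void start_read(uint32_t *buf, size_t nwords)
{
    rx_buf = buf; rx_pos = 0; rx_len = nwords; rx_done = 0;
    /* the device would be told here to interrupt whenever a word is ready */
}

/* Invoked by the interrupt mechanism each time the device has one word. */
void device_isr(void)
{
    rx_buf[rx_pos++] = DEV_DATA;   /* data still moves through the CPU */
    if (rx_pos == rx_len)
        rx_done = 1;
}

int read_finished(void) { return rx_done; }   /* checked by the main program */
```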

  36. Direct Memory Access (DMA) • Impetus behind DMA –Interrupt driven and programmed I/O require active CPU intervention (all data must pass through CPU) • Transfer rate is limited by processor's ability to service the device • CPU is tied up managing I/O transfer

  37. DMA (continued) • Additional module (hardware) on the bus • DMA controller takes over the bus from the CPU for I/O • Waiting for a time when the processor doesn't need the bus • Cycle stealing – seizing the bus from the CPU (more common)

  38. DMA Operation • CPU tells DMA controller: • whether it will be a read or write operation • the address of device to transfer data from or to • the starting address of memory block for the data transfer • the amount of data to be transferred • DMA performs transfer while CPU does other processing • DMA sends interrupt when completes
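Programming a DMA transfer can be pictured as filling in the controller's registers with exactly the four items listed above and then starting it. The register block below is hypothetical, not any specific controller.

```c
#include <stdint.h>

/* Hypothetical DMA controller register block. */
struct dma_regs {
    volatile uint32_t direction;   /* 0 = device-to-memory, 1 = memory-to-device */
    volatile uint32_t device_addr; /* which device/port to transfer from or to */
    volatile uint32_t mem_addr;    /* starting address of the memory block */
    volatile uint32_t count;       /* amount of data (words) to transfer */
    volatile uint32_t start;       /* writing 1 begins the transfer */
};

#define DMA ((struct dma_regs *)0x40001000u)

void start_dma_read(uint32_t device, uint32_t *dest, uint32_t nwords)
{
    DMA->direction   = 0;                       /* read: device to memory */
    DMA->device_addr = device;
    DMA->mem_addr    = (uint32_t)(uintptr_t)dest;
    DMA->count       = nwords;
    DMA->start       = 1;
    /* The CPU is now free to do other work; the controller raises one
     * interrupt when the whole block has been transferred. */
}
```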

  39. Cycle Stealing • DMA controller takes over the bus for a cycle • Transfer of one word of data • Not an interrupt to CPU operations • CPU is suspended just before it accesses the bus, i.e., before an operand or data fetch or a data write • Slows down the CPU, but not as much as the CPU doing the transfer itself

  40. Direct Memory Access • Transfers a block of data directly to or from memory • An interrupt is sent when the transfer is complete • Most efficient

  41. The Memory Hierarchy

  42. Going Down the Hierarchy • Decreasing cost per bit • Increasing capacity • Increasing access time • Decreasing frequency of access to the memory by the processor

  43. Cache Memory • Processor speed faster than memory access speed • Exploit the principle of locality with a small fast memory

  44. Cache and Main Memory

  45. Cache Principles • Contains copy of a portion of main memory • Processor first checks cache • If not found, block of memory read into cache • Because of locality of reference, likely future memory references are in that block

  46. Cache/Main-Memory Structure

  47. Cache Read Operation

  48. Cache Principles • Cache size • Even a small cache has a significant impact on performance • Block size • The unit of data exchanged between cache and main memory • A larger block size yields more hits, until the probability of using the newly fetched data becomes less than the probability of reusing the data that must be moved out of the cache

  49. Cache Principles • Mapping function • Determines which cache location the block will occupy • Direct Mapped Cache, Fully Associative Cache, N-Way Set Associative Cache • Replacement algorithm • Chooses which block to replace • Least-recently-used (LRU) algorithm
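A direct-mapped cache makes the mapping function concrete: the block number modulo the number of lines picks the cache line, so no replacement decision is needed (a set-associative or fully associative cache would add one, such as LRU). The C sketch below uses arbitrary sizes and is illustrative only.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define NUM_LINES   64
#define BLOCK_WORDS 8

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint32_t data[BLOCK_WORDS];
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a hit; on a miss the whole block is read into the line,
 * so that nearby words are likely hits on later references (locality). */
bool cache_lookup(uint32_t addr, uint32_t *word, const uint32_t *main_mem)
{
    uint32_t block  = addr / BLOCK_WORDS;
    uint32_t offset = addr % BLOCK_WORDS;
    uint32_t line   = block % NUM_LINES;      /* direct mapping function */
    uint32_t tag    = block / NUM_LINES;

    if (cache[line].valid && cache[line].tag == tag) {
        *word = cache[line].data[offset];     /* hit */
        return true;
    }
    /* miss: bring the whole block in, then satisfy the access */
    memcpy(cache[line].data, &main_mem[block * BLOCK_WORDS],
           sizeof cache[line].data);
    cache[line].valid = true;
    cache[line].tag   = tag;
    *word = cache[line].data[offset];
    return false;
}
```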

  50. Cache Principles • Write policy • Dictates when the memory write operation takes place • Can occur every time the block is updated (write-through) • Can occur only when the block is replaced (write-back) • The latter minimizes write operations but leaves main memory in an obsolete state
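The write-policy trade-off fits in a couple of lines: write-through updates main memory on every store, while write-back only marks the line dirty and defers the memory write until the line is replaced. The sketch below is illustrative; the names are not from any real cache implementation.

```c
#include <stdbool.h>
#include <stdint.h>

struct line { bool valid, dirty; uint32_t tag, data; };

void cache_write(struct line *ln, uint32_t value, uint32_t *mem_word,
                 bool write_through)
{
    ln->data = value;
    if (write_through)
        *mem_word = value;   /* memory is never stale, but every store costs a write */
    else
        ln->dirty = true;    /* write-back: flush only when the line is evicted */
}
```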
