
Dr. Dale Parson’s Study Guide for the CSC343 midterm exam on Nov 3, Operating Systems, Fall 2015


Presentation Transcript


  1. Dr. Dale Parson’s Study Guide for the CSC343 midterm exam on Nov 3, Operating Systems, Fall 2015

  2. Four Components of a Computer System

  3. Computer System Organization • Computer-system operation • One or more CPUs, device controllers connect through common bus providing access to shared memory • Concurrent execution of CPUs and devices competing for memory cycles

  4. Common Functions of Interrupts • Interrupt transfers control to the interrupt service routine generally, through the interrupt vector, which contains the addresses of all the service routines • Interrupt architecture must save the address of the interrupted instruction • Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt • A trap is a software-generated interrupt caused either by an error or a user request • An operating system is interrupt driven

  5. I/O Structure • Synchronous I/O: after I/O starts, control returns to user program only upon I/O completion • Wait instruction idles the CPU until the next interrupt • Wait loop (contention for memory access) • At most one I/O request is outstanding at a time, no simultaneous I/O processing • Asynchronous I/O: after I/O starts, control returns to user program without waiting for I/O completion • System call – request to the operating system to allow user to wait for I/O completion • Device-status table contains entry for each I/O device indicating its type, address, and state • Operating system indexes into I/O device table to determine device status and to modify table entry to include interrupt
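
A minimal sketch of what a device-status table entry might contain; the struct and field names below are illustrative assumptions, not definitions from the slides:

    /* Illustrative device-status table sketch with made-up field names;
     * real kernels keep much richer per-device state. */
    #include <stdio.h>

    enum dev_state { DEV_IDLE, DEV_BUSY, DEV_ERROR };

    struct device_entry {
        const char     *type;      /* e.g., "disk", "keyboard" */
        unsigned long   address;   /* device or controller address */
        enum dev_state  state;     /* current status */
        int             pending;   /* number of queued I/O requests */
    };

    int main(void)
    {
        struct device_entry table[] = {
            { "keyboard", 0x60,  DEV_IDLE, 0 },
            { "disk0",    0x1f0, DEV_BUSY, 2 },
        };
        /* On an interrupt, the OS would index into this table to find
         * the interrupting device's entry and update its state. */
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            printf("%-8s state=%d pending=%d\n",
                   table[i].type, table[i].state, table[i].pending);
        return 0;
    }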

  6. Direct Memory Access Structure • Used for high-speed I/O devices able to transmit information at close to memory speeds • Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention • Only one interrupt is generated per block, rather than the one interrupt per byte

  7. How a Modern Computer Works – A von Neumann architecture

  8. Storage-Device Hierarchy

  9. Multiprocessors • Review Figures 1 and 3 at: • http://faculty.kutztown.edu/parson/spring2012/MultiprocessorJanuary2012Parson.pdf

  10. Operating System Structure • Multiprogramming needed for efficiency • Single user cannot keep CPU and I/O devices busy at all times • Multiprogramming organizes jobs (code and data) so CPU always has one to execute • A subset of total jobs in system is kept in memory • One job selected and run via job scheduling • When it has to wait (for I/O for example), OS switches to another job • Timesharing (multitasking) is logical extension in which CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing • Response time should be < 1 second • Each user has at least one program executing in memory, a process • If several jobs are ready to run at the same time ⇒ CPU scheduling • If processes don’t fit in memory, swapping moves them in and out to run • Virtual memory allows execution of processes not completely in memory

  11. Operating-System Operations • Interrupt driven by hardware • Software error or request creates exception or trap • Division by zero, request for operating system service • Other process problems include infinite loop, processes modifying each other or the operating system • Dual-mode operation allows OS to protect itself and other system components • User mode and kernel mode • Mode bit provided by hardware • Provides ability to distinguish when system is running user code or kernel code • Some instructions designated as privileged, only executable in kernel mode • System call changes mode to kernel, return from call resets it to user

  12. Transition from User to Kernel Mode • Timer to prevent infinite loop / process hogging resources • Set interrupt after specific period • Operating system decrements counter • When the counter reaches zero, generate an interrupt • Set up before scheduling process to regain control or terminate program that exceeds allotted time
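
As a rough user-space analogue of this timer mechanism, a POSIX program can request a SIGALRM after a time budget and regain control from a runaway loop; the kernel's own scheduling timer works at a lower level, but the "set a timer, take an interrupt, regain control" idea is the same:

    /* Sketch: interrupt a program stuck in an infinite loop after ~2 seconds. */
    #include <signal.h>
    #include <unistd.h>

    static void on_alarm(int sig)
    {
        (void)sig;
        /* write() is async-signal-safe; printf() is not. */
        const char msg[] = "time slice expired, terminating\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);
        alarm(2);              /* deliver SIGALRM after about 2 seconds */
        for (;;)               /* simulate a process hogging the CPU */
            ;
        return 0;
    }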

  13. Migration of Integer A from Disk to Register • Multitasking environments must be careful to use most recent value, no matter where it is stored in the storage hierarchy • Multiprocessor environment must provide cache coherency in hardware such that all CPUs have the most recent value in their cache • Distributed environment situation even more complex • Several copies of a datum can exist • Various solutions covered in Chapter 17

  14. CH2: A View of Operating System Services

  15. Bourne Shell Command Interpreter

  16. GUIs

  17. System Calls • Programming interface to the services provided by the OS • Typically written in a high-level language (C or C++) • Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use • Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java virtual machine (JVM) • Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic)
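
For example, on Linux the same output can be requested through the portable POSIX write() API or, non-portably, by trapping into the kernel with an explicit system-call number; programs normally use the API. A sketch, assuming the Linux-specific SYS_write constant:

    /* Sketch: the POSIX write() API versus a direct system call on Linux.
     * The raw syscall() form only illustrates what the library wrapper does. */
    #define _GNU_SOURCE            /* for the syscall() declaration on glibc */
    #include <sys/syscall.h>       /* SYS_write (Linux-specific) */
    #include <unistd.h>

    int main(void)
    {
        const char msg1[] = "via the POSIX API\n";
        const char msg2[] = "via a direct system call\n";

        write(STDOUT_FILENO, msg1, sizeof msg1 - 1);              /* library wrapper */
        syscall(SYS_write, STDOUT_FILENO, msg2, sizeof msg2 - 1); /* raw trap into kernel */
        return 0;
    }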

  18. Example of System Calls • System call sequence to copy the contents of one file to another file
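
A sketch of that sequence with the POSIX calls, using made-up file names in.txt and out.txt and only minimal error handling:

    /* Sketch: copy one file to another with open/read/write/close. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        int in  = open("in.txt", O_RDONLY);
        int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;
        }
        while ((n = read(in, buf, sizeof buf)) > 0)   /* read a block ... */
            if (write(out, buf, n) != n) {            /* ... and write it out */
                perror("write");
                return 1;
            }
        close(in);
        close(out);
        return 0;
    }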

  19. System Call Implementation • Typically, a number associated with each system call • System-call interface maintains a table indexed according to these numbers • The system call interface invokes intended system call in OS kernel and returns status of the system call and any return values • The caller need know nothing about how the system call is implemented • Just needs to obey API and understand what OS will do as a result of the call • Most details of OS interface hidden from programmer by API • Managed by run-time support library (set of functions built into libraries included with compiler)

  20. API – System Call – OS Relationship

  21. Types of System Calls • Process control • create process, terminate process • end, abort • load, execute • get process attributes, set process attributes • wait for time • wait event, signal event • allocate and free memory • Dump memory if error • Debugger for determining bugs, single step execution • Locks for managing access to shared data between processes
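
A sketch of the create / load-and-execute / wait group on a UNIX-like system, using fork(), execlp(), and waitpid():

    /* Sketch: fork() creates the child, execlp() loads a new program into it,
     * and the parent waits for the child to terminate. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* create a new process */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                  /* child */
            execlp("ls", "ls", "-l", (char *)NULL);  /* load and execute */
            perror("execlp");            /* reached only if exec fails */
            _exit(1);
        }
        int status;
        waitpid(pid, &status, 0);        /* parent waits for the child */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }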

  22. Types of System Calls • File management • create file, delete file • open, close file • read, write, reposition • get and set file attributes • Device management • request device, release device • read, write, reposition • get device attributes, set device attributes • logically attach or detach devices

  23. Types of System Calls (Cont.) • Information maintenance • get time or date, set time or date • get system data, set system data • get and set process, file, or device attributes • Communications • create, delete communication connection • send, receive messages (message-passing model) to a host name or process name • From client to server • Shared-memory model: create and gain access to memory regions • transfer status information • attach and detach remote devices

  24. Types of System Calls (Cont.) • Protection • Control access to resources • Get and set permissions • Allow and deny user access

  25. CHAPTER 3: Process Concept • An operating system executes a variety of programs: • Batch system – jobs • Time-shared systems – user programs or tasks • Textbook uses the terms job and process almost interchangeably • Process – a program in execution; process execution must progress in sequential fashion • A process includes: • program counter • stack • data section • heap (dynamically allocated memory) • PARSON NOTE: In a multithreaded process each thread has its own program counter (a.k.a. instruction pointer), and other CPU registers such as Stack Pointer and Frame Pointer.

  26. Process in (Virtual) Memory

  27. Process (or Thread!) State • As a process (or thread) executes, it changes state • new: The process is being created • running: Instructions are being executed • waiting: The process is waiting for some event to occur • ready: The process is waiting to be assigned to a processor • terminated: The process has finished execution

  28. KNOW the FCFS (first-come, first-served), SJF (shortest-job-first), and RR (round-robin) schedulers.
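
A worked sketch comparing FCFS and SJF average waiting time on made-up CPU bursts of 24, 3, and 3 time units, all arriving at time 0 (RR would additionally need a time quantum):

    /* Sketch: average waiting time under FCFS (arrival order) versus
     * SJF (shortest burst first) for bursts that all arrive at time 0. */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    static double avg_wait(const int *burst, int n)
    {
        double wait = 0.0;
        int clock = 0;
        for (int i = 0; i < n; i++) {   /* each job waits for all jobs run before it */
            wait += clock;
            clock += burst[i];
        }
        return wait / n;
    }

    int main(void)
    {
        int fcfs[] = { 24, 3, 3 };               /* run in arrival order */
        int sjf[]  = { 24, 3, 3 };
        int n = 3;

        qsort(sjf, n, sizeof sjf[0], cmp);       /* SJF runs the shortest burst first */
        printf("FCFS average wait: %.2f\n", avg_wait(fcfs, n));  /* (0+24+27)/3 = 17.00 */
        printf("SJF  average wait: %.2f\n", avg_wait(sjf, n));   /* (0+3+6)/3   =  3.00 */
        return 0;
    }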

  29. Process Control Block (PCB) Information associated with each process • Process state • Program counter (PER THREAD) • CPU registers (PER THREAD) • CPU scheduling information (PER THREAD) • Memory-management information • Accounting information (PER PROCESS & THREAD) • I/O status information (PER PROCESS & THREAD)
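
A toy sketch of PCB-like bookkeeping; the field names below are assumptions for illustration, not an actual kernel's layout (Linux's real PCB, task_struct, is far larger):

    /* Illustrative PCB sketch. Items marked "per thread" would move into a
     * separate thread control block in a multithreaded design. */
    #include <stdint.h>
    #include <stdio.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;             /* process identifier */
        enum proc_state state;           /* process state */
        uintptr_t       program_counter; /* per thread */
        uintptr_t       registers[16];   /* per thread: saved CPU registers */
        int             priority;        /* CPU-scheduling information (per thread) */
        void           *page_table;      /* memory-management information */
        unsigned long   cpu_time_used;   /* accounting information */
        int             open_files[16];  /* I/O status information */
    };

    int main(void)
    {
        struct pcb p = { .pid = 1234, .state = READY, .priority = 5 };
        printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
        return 0;
    }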

  30. CPU Switch From Process to Process

  31. Process Scheduling Queues • Job queue – set of all processes in the system • Ready queue – set of all processes residing in main memory, ready and waiting to execute • Device queues – set of processes waiting for an I/O device • Processes migrate among the various queues

  32. Schedulers • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue • Short-term scheduler (or CPU scheduler) – selects which process (thread!) should be executed next and allocates CPU

  33. Representation of Process Scheduling • Queueing diagram represents queues, resources, flows

  34. Addition of Medium Term Scheduling

  35. Schedulers (Cont) • Short-term scheduler is invoked very frequently (milliseconds) ⇒ (must be fast) • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ (may be slow) • The long-term scheduler controls the degree of multiprogramming • Processes can be described as either: • I/O-bound process – spends more time doing I/O than computations, many short CPU bursts • CPU-bound process – spends more time doing computations; few very long CPU bursts

  36. Context Switch • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch • Context of a process represented in the PCB • Context-switch time is overhead; the system does no useful work while switching • Time dependent on hardware support

  37. Process Creation • Parent process creates children processes, which, in turn create other processes, forming a tree of processes • Generally, process identified and managed via a process identifier (pid) • Resource sharing • Parent and children share all resources • Children share subset of parent’s resources • Parent and child share no resources • Execution • Parent and children execute concurrently • Parent waits until children terminate

  38. Process Termination • Process executes last statement and asks the operating system to delete it (exit) • Output data from child to parent (via wait) • Process’ resources are deallocated by operating system • Parent may terminate execution of children processes (abort) • Child has exceeded allocated resources • Task assigned to child is no longer required • If parent is exiting • Some operating systems do not allow child to continue if its parent terminates • All children terminated - cascading termination

  39. Interprocess Communication • Processes within a system may be independent or cooperating • Cooperating process can affect or be affected by other processes, including sharing data • Reasons for cooperating processes: • Information sharing • Computation speedup • Modularity • Convenience • Cooperating processes need interprocess communication (IPC) • Two models of IPC • Shared memory • Message passing
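
A sketch of the message-passing model using a POSIX pipe between a parent and its child; a shared-memory version would instead use calls such as shmget() or mmap():

    /* Sketch: message passing between cooperating processes via a pipe.
     * The child writes a message; the parent reads it. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) < 0) {
            perror("pipe");
            return 1;
        }
        pid_t pid = fork();
        if (pid == 0) {                  /* child: the sender */
            close(fd[0]);
            const char msg[] = "hello from the child";
            write(fd[1], msg, sizeof msg);
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                    /* parent: the receiver */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("parent received: %s\n", buf);
        }
        close(fd[0]);
        wait(NULL);
        return 0;
    }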

  40. CH4: Single and Multithreaded Processes

  41. Benefits • Responsiveness • Resource Sharing • Economy • Scalability

  42. Multithreaded Server Architecture

  43. Concurrent Execution on a Single-core System

  44. Parallel Execution on a Multicore System
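
A minimal pthreads sketch of two threads doing independent CPU-bound work, which a multicore system may run in parallel on separate cores:

    /* Sketch: two POSIX threads doing independent work; on a multicore
     * machine the OS may schedule them on different cores at the same time. */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        long id = (long)(intptr_t)arg;
        long sum = 0;
        for (long i = 0; i < 1000000; i++)   /* some CPU-bound work */
            sum += i;
        printf("thread %ld done, sum=%ld\n", id, sum);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)(intptr_t)1);
        pthread_create(&t2, NULL, worker, (void *)(intptr_t)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }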

  45. Many-to-One Model (BEFORE THREAD-SAFE KERNELS!)

  46. One-to-One Model (THREAD SAFE!)

  47. Many-to-Many Model (NOT THERE YET)

  48. CH5: Race Condition & Deadlocks • Results of execution and possible bugs depend on relative timing of threads interacting in a critical section. • A critical section comprises regions of code that interact over a shared, non-atomic data set.

  49. Solution to Critical-Section Problem 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted • Assume that each process executes at a nonzero speed • No assumption concerning relative speed of the N processes
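
One common way user code obtains mutual exclusion is a mutex around the critical section; this pthreads sketch fixes the race shown above (semaphores, Peterson's solution, and hardware atomic instructions are alternative mechanisms):

    /* Sketch: the same shared counter protected by a mutex, giving
     * mutual exclusion over the critical section; the total is now exact. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* entry section */
            counter++;                    /* critical section */
            pthread_mutex_unlock(&lock);  /* exit section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* always 2000000 */
        return 0;
    }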

  50. Critical-Section Handling in OS Two approaches depending on whether the kernel is preemptive or non-preemptive • Preemptive – allows preemption of process when running in kernel mode • Non-preemptive – runs until exits kernel mode, blocks, or voluntarily yields CPU • Essentially free of race conditions in kernel mode
