
CSCI/CMPE 4334 Operating Systems Review: Exam 2


Presentation Transcript


  1. CSCI/CMPE 4334 Operating Systems Review: Exam 2

  2. Review • Chapters 7 ~ 10 in your textbook • Lecture slides • In-class exercises (on the course website) • Review slides

  3. Review • 5 questions (100 points) + 1 bonus question (20 points) • Question types • Q/A

  4. Time & Place & Event • 2:35pm ~ 3:50pm, April 22, Tuesday • ENGR 1.290 • Closed-book exam

  5. Abstract Computing Environment (diagram): a program runs as a process within the abstract computing environment; the Process Manager (process descriptions, scheduler, synchronization, deadlock handling, protection) works with the Resource Manager, File Manager, Device Manager, and Memory Manager to allocate the CPU, memory, devices, and other hardware.

  6. Chapter 7: Scheduling • Thread scheduling • Ready, running, and blocked states • Context switching • Mechanism to call scheduler • Voluntary call • yield function • Involuntary call • Interval timer • Scheduling methods (see lecture slides and exercises) • First-come, first served (FCFS) • Shorter jobs first (SJF) or Shortest job next (SJN) • Higher priority jobs first • Job with the closest deadline first

  7. Thread Scheduler Organization (diagram): new threads enter the ready list; the scheduler selects a "ready" job and dispatches it to the CPU ("running"); a running job either finishes (done), returns to the ready list on preemption or a voluntary yield, or becomes "blocked" while it requests a resource, rejoining the ready list once the resource manager allocates that resource.

  8. Context Switching (diagram): on a context switch, the CPU's registers and state are saved into the old thread's descriptor and reloaded from the new thread's descriptor.

  9. Process Model and Metrics • P will be a set of processes, p0, p1, ..., pn-1 • S(pi) is the state of pi {running, ready, blocked} • τ(pi), the service time • The amount of time pi needs to be in the running state before it is completed • W (pi), the waiting time • The time pi spends in the ready state before its first transition to the running state • TTRnd(pi), turnaround time • The amount of time between the moment pi first enters the ready state and the moment the process exits the running state for the last time
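
A minimal C sketch, assuming every process enters the ready state at time 0 and runs to completion without blocking, that computes W(pi) and TTRnd(pi) for an FCFS schedule from illustrative service times τ(pi):

    #include <stdio.h>

    /* Under FCFS with all arrivals at time 0, W(p_i) is the sum of the
       service times of the processes ahead of p_i, and
       TTRnd(p_i) = W(p_i) + tau(p_i). */
    int main(void) {
        double tau[] = { 350, 125, 475, 250, 75 };   /* illustrative service times */
        int n = (int)(sizeof tau / sizeof tau[0]);
        double clock = 0, total_w = 0, total_ttrnd = 0;

        for (int i = 0; i < n; i++) {
            double w = clock;                        /* W(p_i)     */
            double ttrnd = clock + tau[i];           /* TTRnd(p_i) */
            printf("p%d: W = %.0f, TTRnd = %.0f\n", i, w, ttrnd);
            clock += tau[i];
            total_w += w;
            total_ttrnd += ttrnd;
        }
        printf("average W = %.1f, average TTRnd = %.1f\n", total_w / n, total_ttrnd / n);
        return 0;
    }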

  10. Everyday scheduling methods • First-come, first served (FCFS) • Shorter jobs first (SJF) • or Shortest job next (SJN) • Higher priority jobs first • Job with the closest deadline first

  11. Invoking the Scheduler • Need a mechanism to call the scheduler • Voluntary call • Process blocks itself • Calls the scheduler • Non-preemptive scheduling • Involuntary call • External force (interrupt) blocks the process • Calls the scheduler • Preemptive scheduling
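
A user-level sketch of the involuntary path, assuming an interval timer that delivers SIGALRM; the schedule() call is a hypothetical placeholder for the scheduler entry point:

    #include <signal.h>
    #include <sys/time.h>

    static volatile sig_atomic_t need_resched = 0;

    /* Involuntary call: the interval timer interrupts the running
       computation and requests a scheduling decision. */
    static void on_timer(int sig) { (void)sig; need_resched = 1; }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_timer;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval quantum = { .it_interval = {0, 10000},   /* 10 ms time slice */
                                     .it_value    = {0, 10000} };
        setitimer(ITIMER_REAL, &quantum, NULL);

        for (;;) {
            /* ... run the current thread ... */
            if (need_resched) {        /* preemptive scheduling point */
                need_resched = 0;
                /* schedule();          pick the next ready thread here */
            }
            /* A thread can also call schedule() itself: a voluntary yield. */
        }
    }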

  12. Chapter 8: Basic Synchronization • Critical sections • Ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section (this property is called mutual exclusion) • Requirements for Critical-Section Solutions • Mutual Exclusion • Progress • Bounded Waiting

  13. Chapter 8: Basic Synchronization (cont'd) • Semaphore and its P() and V() functions • Definition • Usage • Problems • Shared Account Balance Problem • Bounded Buffer Problem • Readers-Writers Problem • Sleepy Barber Problem • Dining-Philosophers Problem

  14. The Critical-Section Problem – cont. • Structure of process Pi:
      repeat
          entry section
              critical section
          exit section
              remainder section
      until false;

  15. Updating a Shared Variable
      shared double balance;

      Code for p1:                      Code for p2:
      . . .                             . . .
      balance = balance + amount;       balance = balance - amount;
      . . .                             . . .

      Both processes apply a read-modify-write (balance += amount, balance -= amount) to the shared balance.
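
A minimal pthreads sketch of this race (compile with -pthread; the deposit/withdraw function names are illustrative): without mutual exclusion the two read-modify-write sequences interleave, so the final balance is usually not the expected 0.

    #include <pthread.h>
    #include <stdio.h>

    static double balance = 0.0;      /* shared variable, no protection */

    static void *deposit(void *arg)  { for (int i = 0; i < 1000000; i++) balance += 1.0; return NULL; }
    static void *withdraw(void *arg) { for (int i = 0; i < 1000000; i++) balance -= 1.0; return NULL; }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, deposit, NULL);
        pthread_create(&p2, NULL, withdraw, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);
        /* Each update is load, add, store; interleaved updates lose increments. */
        printf("final balance = %f\n", balance);
        return 0;
    }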

  16. Requirements for Critical-Section Solutions • Mutual Exclusion. • If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. • Progress • If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. • Bounded Waiting • A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.


  18. Some Possible Solutions • Disable interrupts • Software solution – locks • Transactions • FORK(), JOIN(), and QUIT() [Chapter 2] • Terminate processes with QUIT() to synchronize • Create processes whenever critical section is complete • … something new …

  19. Dijkstra Semaphore • A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:
      V(s): [s = s + 1]
      P(s): [while (s == 0) {wait}; s = s - 1]
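
The semantics of P and V can be approximated in user space; a sketch using a pthread mutex and condition variable (POSIX also ships ready-made counting semaphores as sem_wait/sem_post in <semaphore.h>):

    #include <pthread.h>

    typedef struct {
        int count;                    /* the nonnegative integer s */
        pthread_mutex_t lock;
        pthread_cond_t  nonzero;
    } semaphore;

    /* e.g. semaphore mutex = { 1, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }; */

    void P(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        while (s->count == 0)                 /* while (s == 0) wait */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->count--;                           /* s = s - 1 */
        pthread_mutex_unlock(&s->lock);
    }

    void V(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        s->count++;                           /* s = s + 1 */
        pthread_cond_signal(&s->nonzero);     /* wake one waiter, if any */
        pthread_mutex_unlock(&s->lock);
    }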

  20. Solving the Canonical Problem
      proc_0() {
          while (TRUE) {
              <compute section>;
              P(mutex);
              <critical section>;
              V(mutex);
          }
      }

      proc_1() {
          while (TRUE) {
              <compute section>;
              P(mutex);
              <critical section>;
              V(mutex);
          }
      }

      semaphore mutex = 1;
      fork(proc_0, 0);
      fork(proc_1, 0);

  21. Shared Account Balance Problem
      proc_0() {
          . . .
          /* Enter the CS */
          P(mutex);
          balance += amount;
          V(mutex);
          . . .
      }

      proc_1() {
          . . .
          /* Enter the CS */
          P(mutex);
          balance -= amount;
          V(mutex);
          . . .
      }

      semaphore mutex = 1;
      fork(proc_0, 0);
      fork(proc_1, 0);

  22. Bounded-Buffer Problem (diagram): the producer takes a buffer from the empty pool, fills it, and adds it to the full pool; the consumer removes a buffer from the full pool, empties it, and returns it to the empty pool.
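
A runnable C sketch of the bounded-buffer solution using POSIX semaphores (buffer size and item count are arbitrary choices): empty_slots counts the empty pool, full_slots counts the full pool, and mutex protects the indices.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                       /* buffers in the pool */

    static int buffer[N];
    static int in = 0, out = 0;       /* next slot to fill / to empty */
    static sem_t empty_slots, full_slots, mutex;

    static void *producer(void *arg) {
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty_slots);               /* P(empty) */
            sem_wait(&mutex);                     /* P(mutex) */
            buffer[in] = item; in = (in + 1) % N;
            sem_post(&mutex);                     /* V(mutex) */
            sem_post(&full_slots);                /* V(full)  */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 32; i++) {
            sem_wait(&full_slots);
            sem_wait(&mutex);
            int item = buffer[out]; out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty_slots);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, N);             /* empty pool starts full */
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }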

  23. Readers-Writers Problem (diagram): several reader processes and writer processes compete for access to a shared data object; many readers may read at once, but a writer needs exclusive access.

  24. Sleepy Barber Problem • Barber can cut one person’s hair at a time • Other customers wait in a waiting room • (Diagram: shop with a waiting room, sliding-door entrances to the waiting room and to the barber’s room, and a shop exit)
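
One common semaphore structure for the sleepy barber, sketched with POSIX primitives; the CHAIRS constant and the routine names are assumptions for illustration, not taken from the lecture.

    #include <pthread.h>
    #include <semaphore.h>

    #define CHAIRS 4                   /* waiting-room capacity (assumed) */

    static sem_t customers;            /* barber sleeps on this until someone arrives */
    static sem_t barber_ready;         /* barber signals when the chair is free */
    static sem_t mutex;                /* protects the waiting counter */
    static int   waiting = 0;

    static void *barber(void *arg) {
        for (;;) {
            sem_wait(&customers);      /* sleep until a customer arrives */
            sem_wait(&mutex);
            waiting--;                 /* take one customer from the waiting room */
            sem_post(&mutex);
            sem_post(&barber_ready);   /* invite the customer into the barber's room */
            /* cut_hair(); */
        }
        return NULL;
    }

    static void customer(void) {
        sem_wait(&mutex);
        if (waiting < CHAIRS) {        /* room in the waiting room? */
            waiting++;
            sem_post(&customers);      /* wake the barber if asleep */
            sem_post(&mutex);
            sem_wait(&barber_ready);   /* wait for the barber */
            /* get_haircut(); */
        } else {
            sem_post(&mutex);          /* waiting room full: leave the shop */
        }
    }

    int main(void) {
        sem_init(&customers, 0, 0);
        sem_init(&barber_ready, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_t b;
        pthread_create(&b, NULL, barber, NULL);
        for (int i = 0; i < 6; i++) customer();   /* a few arriving customers */
        return 0;                                 /* barber thread is left running in this sketch */
    }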

  25. Dining-Philosophers Problem
      philosopher(int i) {
          while (TRUE) {
              think();
              eat();
          }
      }
      • Shared data: semaphore chopstick[5]; (each initialized to 1)
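
A runnable sketch of the straightforward per-chopstick solution with POSIX semaphores; note that if all five philosophers grab their left chopstick at the same time, the system deadlocks, which is what motivates the simultaneous-semaphore version in Chapter 9.

    #include <pthread.h>
    #include <semaphore.h>

    #define N 5
    static sem_t chopstick[N];                   /* one binary semaphore per chopstick */

    static void *philosopher(void *arg) {
        int i = *(int *)arg;
        for (int round = 0; round < 100; round++) {
            /* think(); */
            sem_wait(&chopstick[i]);             /* P(chopstick[i])           */
            sem_wait(&chopstick[(i + 1) % N]);   /* P(chopstick[(i+1) mod 5]) */
            /* eat(); */
            sem_post(&chopstick[(i + 1) % N]);
            sem_post(&chopstick[i]);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
        for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }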

  26. Chapter 9: High-Level Synchronization • Simultaneous semaphore • Psimultaneous(S1, ...., Sn) • Event • wait() • signal() • Monitor • What is a condition variable? How is it used? • What are the 3 operations on a condition variable?

  27. Chapter 9: High-Level Synchronization (cont'd) • Examples • Shared Balance • Readers & Writers • Synchronizing Traffic • Dining Philosophers • Interprocess Communication (IPC)

  28. Abstracting Semaphores • As we have seen, relatively simple problems, such as the dining philosophers problem, can be very difficult to solve • Look for abstractions to simplify solutions • AND synchronization • Events • Monitors • … there are others ...

  29. Simultaneous Semaphores • The order of P operations on semaphores is critical • Otherwise deadlocks are possible • Simultaneous semaphores • Psimultaneous(S1, ...., Sn) • The process gets all the semaphores or none of them

  30. Dining Philosophers Problem
      philosopher(int i) {
          while (TRUE) {
              // Think
              // Eat
              Psimultaneous(fork[i], fork[(i+1) mod 5]);
              eat();
              Vsimultaneous(fork[i], fork[(i+1) mod 5]);
          }
      }

      semaphore fork[5] = (1, 1, 1, 1, 1);
      fork(philosopher, 1, 0);
      fork(philosopher, 1, 1);
      fork(philosopher, 1, 2);
      fork(philosopher, 1, 3);
      fork(philosopher, 1, 4);

  31. Events • Exact definition is specific to each OS • A process can wait on an event until another process signals the event • Have event descriptor (“event control block”) • Active approach • Multiple processes can wait on an event • Exactly one process is unblocked when a signal occurs • A signal with no waiting process is ignored • May have a queue function that returns number of processes waiting on the event

  32. Monitors • Construct ensures that only one process can be active at a time in the monitor – no need to code the synchronization constraint explicitly • High-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes.
      class monitor {
          variable declarations
          semaphore mutex = 1;
          public P1: (…) {
              P(mutex);
              <processing for P1>
              V(mutex);
          };
          ........
      }

  33. Monitors – cont. • To allow a process to wait within the monitor, a condition variable must be declared, as condition x, y; • Condition variables can only be used with the operations wait and signal. • The operation x.wait; means that the process invoking this operation is suspended until another process invokes x.signal; • The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect.

  34. Condition Variables • Essentially an event (as defined previously) • Occurs only inside a monitor • Operations to manipulate condition variable • wait: Suspend invoking process until another executes a signal • signal: Resume one process if any are suspended, otherwise do nothing • queue: Return TRUE if there is at least one process suspended on the condition variable
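
The same idea in pthreads terms, as a sketch: the mutex plays the role of the monitor lock, and pthread_cond_wait / pthread_cond_signal behave like x.wait and x.signal (pthread_cond_wait atomically releases the mutex while the caller is suspended and reacquires it on wakeup).

    #include <pthread.h>

    /* Monitor-style shared counter: enter and leave the "monitor" by
       locking and unlocking the mutex; 'nonzero' acts as condition x. */
    static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
    static int resources = 0;

    void release_resource(void) {
        pthread_mutex_lock(&monitor);     /* enter the monitor */
        resources++;
        pthread_cond_signal(&nonzero);    /* x.signal: resume one waiter, if any */
        pthread_mutex_unlock(&monitor);   /* leave the monitor */
    }

    void acquire_resource(void) {
        pthread_mutex_lock(&monitor);
        while (resources == 0)            /* x.wait: suspend until signaled */
            pthread_cond_wait(&nonzero, &monitor);
        resources--;
        pthread_mutex_unlock(&monitor);
    }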

  35. Refined IPC Mechanism • OS manages the mailbox space • More secure message system • (Diagram: p0 calls send(… p1, …) through the OS interface; the information to be shared is copied from p0’s address space into the mailbox for p1, and p1 retrieves the copy with receive(…))
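
One concrete realization of an OS-managed mailbox is a POSIX message queue; in the sketch below the queue name and sizes are illustrative, and both the send and receive ends are shown in a single process for brevity (on Linux, link with -lrt).

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t mb = mq_open("/p1_mailbox", O_CREAT | O_RDWR, 0600, &attr);
        if (mb == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "info to be shared";
        mq_send(mb, msg, strlen(msg) + 1, 0);      /* send(... p1, ...): OS copies into the mailbox */

        char buf[64];                              /* must be at least mq_msgsize bytes */
        mq_receive(mb, buf, sizeof(buf), NULL);    /* receive(...): copy out of the mailbox */
        printf("received: %s\n", buf);

        mq_close(mb);
        mq_unlink("/p1_mailbox");
        return 0;
    }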

  36. Chapter 10: Deadlock • 4 necessary conditions of deadlock • Mutual exclusion • Hold and wait • No preemption • Circular wait • How to deal with deadlock in OS • Prevention • Avoidance • Recovery

  37. Chapter 10: Deadlock (cont'd) • Prevention • Ensure that at least one of the necessary conditions is false at all times • How to? • Hold and wait • Circular wait • Avoidance • Safe state • Banker’s algorithm (see more examples in lecture slides) • Safety algorithm to detect a safe sequence of process execution • Resource-request algorithm to allocate more resources for a process • Detection and recovery • Detection algorithm to detect deadlock • Deadlock detection from graphs • Reusable Resource Graphs (RRGs) • Consumable Resource Graphs (CRGs)

  38. Deadlock Characterization • Deadlock can arise if four conditions hold simultaneously • Mutual exclusion • Hold and wait • No preemption • Circular wait

  39. Dealing with Deadlocks • Three ways • Prevention • place restrictions on resource requests to make deadlock impossible • Avoidance • plan ahead to avoid deadlock. • Recovery • Check for deadlock (periodically or sporadically) and recover from it • Manual intervention (the ad hoc approach) • Reboot the machine if it seems too slow

  40. Hold and Wait • Need to be sure a process does not hold one resource while requesting another • Approach 1: Force a process to request all resources it needs at one time • Approach 2: If a process needs to acquire a new resource, it must first release all resources it holds, then reacquire all it needs
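
A pthreads sketch in the spirit of Approach 2: never block on a second lock while holding the first; if the second cannot be acquired immediately, release what is held and retry (the printer/disk lock names are illustrative).

    #include <pthread.h>
    #include <sched.h>

    static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t disk    = PTHREAD_MUTEX_INITIALIZER;

    void acquire_both(void) {
        for (;;) {
            pthread_mutex_lock(&printer);
            if (pthread_mutex_trylock(&disk) == 0)
                return;                         /* got both resources */
            pthread_mutex_unlock(&printer);     /* don't hold one while waiting for the other */
            sched_yield();                      /* back off, then try again */
        }
    }

    void release_both(void) {
        pthread_mutex_unlock(&disk);
        pthread_mutex_unlock(&printer);
    }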

  41. Circular Wait • Occurs when a set of n processes each hold units of n different resources, each waiting for a resource held by another process in the set • Impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration • Semaphore example • semaphores A and B, initialized to 1; both processes request them in the same order, so no circular wait can form:
          P0              P1
          wait(A);        wait(A);
          wait(B);        wait(B);

  42. Allowing Preemption • Allow a process to time-out on a blocked request -- withdrawing the request if it fails • r = request resource • w = withdraw request • d = release or deallocate resource • (State diagram: a blocked request r_u from state Si can be withdrawn (w_u) and resources deallocated (d_v), but when r_u is issued again from Sk there is no guarantee it will succeed)

  43. Avoidance • Define a model of system states, then choose a strategy that will guarantee that the system will not go to a deadlock state • Requires extra information, e.g., the maximum claim for each process • Allows resource manager to see the worst case that could happen, then to allow transitions based on that knowledge

  44. Banker’s Algorithm • Best known of the avoidance strategies • Modeled after lending policies used by banks • Each new process entering the system declares the maximum use of resources it may need • When a process requests a resource, it may have to wait (until granting the request leaves the system in a safe state) • When a process gets all its resources, it must return them in a finite amount of time

  45. Data Structures for the Banker’s Algorithm Let n = number of processes, and m = number of resources types. • Available • Vector of length m. If available [j] = k, there are k instances of resource type Rj available. • Max • n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj. • Allocation • n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj. • Need • n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task. • Need [i,j] = Max[i,j] – Allocation [i,j]

  46. Safety Algorithm
      1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
         Work := Available
         Finish[i] := false for i = 1, 2, 3, …, n.
      2. Find an i such that both:
         (a) Finish[i] = false
         (b) Need_i ≤ Work
         If no such i exists, go to step 4.
      3. Work := Work + Allocation_i
         Finish[i] := true
         Go to step 2.
      4. If Finish[i] = true for all i, then the system is in a safe state.
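
A compact C rendering of this safety algorithm; the matrix sizes and the state in main() are made-up illustrative values, not one of the lecture examples.

    #include <stdbool.h>
    #include <stdio.h>

    #define NPROC 3   /* n = number of processes */
    #define NRES  2   /* m = number of resource types */

    /* Returns true if the state described by available, allocation and
       need is safe, following the steps on this slide. */
    bool is_safe(int available[NRES], int allocation[NPROC][NRES], int need[NPROC][NRES]) {
        int  work[NRES];
        bool finish[NPROC] = { false };
        for (int j = 0; j < NRES; j++) work[j] = available[j];     /* Work := Available */

        int done = 0;
        while (done < NPROC) {
            bool progressed = false;
            for (int i = 0; i < NPROC; i++) {
                if (finish[i]) continue;
                bool fits = true;                                  /* Need_i <= Work ? */
                for (int j = 0; j < NRES; j++)
                    if (need[i][j] > work[j]) { fits = false; break; }
                if (fits) {
                    for (int j = 0; j < NRES; j++)
                        work[j] += allocation[i][j];               /* Work := Work + Allocation_i */
                    finish[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;     /* some Finish[i] remain false: not safe */
        }
        return true;                           /* Finish[i] = true for all i */
    }

    int main(void) {
        int available[NRES]         = { 1, 1 };
        int allocation[NPROC][NRES] = { {1, 0}, {0, 1}, {1, 1} };
        int need[NPROC][NRES]       = { {1, 1}, {2, 0}, {0, 0} };
        printf("state is %s\n", is_safe(available, allocation, need) ? "safe" : "unsafe");
        return 0;
    }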

  47. Resource-Request Algorithm for Process Pi
      Request_i = request vector for process Pi. If Request_i[j] = k, then process Pi wants k instances of resource type Rj.
      1. If Request_i ≤ Need_i, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
      2. If Request_i ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
      3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
         Available := Available – Request_i;
         Allocation_i := Allocation_i + Request_i;
         Need_i := Need_i – Request_i;
      • If safe ⇒ the resources are allocated to Pi.
      • If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.

  48. Detection Algorithm
      1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
         (a) Work := Available
         (b) For i = 1, 2, …, n: if Allocation_i ≠ 0, then Finish[i] := false; otherwise, Finish[i] := true.
      2. Find an index i such that both:
         (a) Finish[i] = false
         (b) Request_i ≤ Work
         If no such i exists, go to step 4.
      3. (a) Work := Work + Allocation_i
         (b) Finish[i] := true
         Go to step 2.
      4. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] = false, then Pi is deadlocked.
      The algorithm requires on the order of m × n² operations to detect whether the system is in a deadlocked state.

  49. Good Luck! Q/A
