
Critical Regions



  1. Critical Regions § 7.6 • Although semaphores provide a convenient and effective mechanism for process synchronization, their incorrect use can still result in timing errors that are difficult to detect. • Examples: • wait(mutex); critical section; wait(mutex); — here a deadlock will occur, because the second wait blocks forever. • signal(mutex); critical section; wait(mutex); — here several processes may be executing in their critical sections simultaneously, so mutual exclusion is violated. • A process omits the wait(mutex), or the signal(mutex), or both: either mutual exclusion is violated or a deadlock will occur.
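
The following minimal sketch (not from the slides; it assumes a POSIX system, and the function names are illustrative) shows the two faulty orderings above with POSIX semaphores: the double wait deadlocks, and the signal-before-wait ordering lets several threads into the critical section at once.

```c
/* Illustrative sketch only: POSIX semaphores, hypothetical function names. */
#include <semaphore.h>

sem_t mutex;                        /* assume sem_init(&mutex, 0, 1) was called */

void faulty_double_wait(void)       /* wait ... wait */
{
    sem_wait(&mutex);
    /* critical section */
    sem_wait(&mutex);               /* should have been sem_post(): blocks forever -> deadlock */
}

void faulty_signal_first(void)      /* signal ... wait */
{
    sem_post(&mutex);               /* raises the count, so other threads can enter */
    /* critical section */          /* several threads may be here at once          */
    sem_wait(&mutex);
}
```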

  2. Critical Regions § 7.6 (quiz on the material of slide 1) • Multiple-Choice Question: ( ) What kind of problem can happen if more than one thread works on a semaphore in the following sequence? signal(mutex); criticalSection(); wait(mutex); (a) starvation (b) deadlock (c) blocking (d) not synchronizing (e) violates mutual exclusion • Answer: e

  3. Critical Region • High-level synchronization construct. • A shared variable v of type T is declared as: v: shared T; • Variable v is accessed only inside the statement region v when (B) S; where B is a Boolean expression. • While statement S is being executed, no other process can access variable v.

  4. Critical Region • Regions referring to the same shared variable exclude each other in time. • When a process tries to execute the region statement, the Boolean expression B is evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B becomes true and no other process is in the region associated with v.
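
As a hedged illustration (not part of the construct itself), the semantics of region v when (B) S can be approximated with a pthreads mutex and condition variable; B(), S(), v_lock, and v_changed below are placeholder names.

```c
/* Sketch: emulating "region v when (B) S" with pthreads (placeholder names). */
#include <pthread.h>

int  B(void);                           /* placeholder for the Boolean guard      */
void S(void);                           /* placeholder for the guarded statement  */

static pthread_mutex_t v_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  v_changed = PTHREAD_COND_INITIALIZER;

void region_v_when_B_do_S(void)
{
    pthread_mutex_lock(&v_lock);        /* regions on v exclude each other in time */
    while (!B())                        /* delayed until B holds and no other      */
        pthread_cond_wait(&v_changed, &v_lock);   /* process is in the region      */
    S();                                /* S executes with exclusive access to v   */
    pthread_cond_broadcast(&v_changed); /* other delayed processes re-evaluate B   */
    pthread_mutex_unlock(&v_lock);
}
```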

  5. Critical Region • Example: if the two statements region v when (true) S1; and region v when (true) S2; are executed concurrently in distinct sequential processes, the result will be equivalent to the sequential execution “S1 followed by S2” or “S2 followed by S1.”

  6. Critical Region • The critical-region construct guards against certain simple errors that a programmer may make when using the semaphore solution to the critical-section problem. • However, it does not necessarily eliminate all synchronization errors; rather, it reduces their number. • It can be used to solve certain general synchronization problems.

  7. Example – bounded buffer • Shared data:
     struct buffer {
         item pool[n];
         int count, in, out;
     }
  • Producer process inserts nextp into the shared buffer:
     region buffer when (count < n) {
         pool[in] = nextp;
         in = (in + 1) % n;
         count++;
     }

  8. Example – bounded buffer • Consumer process removes an item from the shared buffer and puts it in nextc:
     region buffer when (count > 0) {
         nextc = pool[out];
         out = (out + 1) % n;
         count--;
     }
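
A compilable pthreads sketch of the producer and consumer above; the field names mirror the slides, but the buffer size N = 8 and item = int are assumptions, and the translation is only an illustration of the region semantics.

```c
/* Illustrative translation of the bounded-buffer regions into pthreads. */
#include <pthread.h>

#define N 8                             /* assumed buffer size */
typedef int item;                       /* assumed item type   */

static item pool[N];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;

void produce(item nextp)                /* region buffer when (count < n) */
{
    pthread_mutex_lock(&lock);
    while (count == N)
        pthread_cond_wait(&changed, &lock);
    pool[in] = nextp;
    in = (in + 1) % N;
    count++;
    pthread_cond_broadcast(&changed);   /* waiting consumers retest the guard */
    pthread_mutex_unlock(&lock);
}

item consume(void)                      /* region buffer when (count > 0) */
{
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&changed, &lock);
    item nextc = pool[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_broadcast(&changed);   /* waiting producers retest the guard */
    pthread_mutex_unlock(&lock);
    return nextc;
}
```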

  9. Implement the conditional critical region (Skip)

  10. Monitors § 7.7 • High-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes. • A monitor presents a set of programmer-defined operations that are provided mutual exclusion within the monitor. • A monitor type consists of declarations of variables whose values define the state of an instance of the type, as well as the bodies of procedures or functions that implement operations on the type.

  11. Monitor Syntax
     monitor monitor-name {
         // shared variable declarations
         procedure body P1 (…) { . . . }
         procedure body P2 (…) { . . . }
         ...
         procedure body Pn (…) { . . . }
         {
             initialization code
         }
     }

  12. Condition Variables • Encapsulation: access to the local variables is limited to the local procedures. • The monitor construct prohibits concurrent access to all procedures defined within the monitor. • Only one process may be active within the monitor at a time. • Synchronization is built into the monitor type, so the programmer does not need to code it explicitly. • Special operations wait and signal can be invoked on variables of type condition: condition x, y; • A process that invokes x.wait is suspended until another process invokes x.signal.

  13. Condition Variables (quiz on the material of slide 12) • True/False Question: ( ) Although there may be several processes inside the monitor at the same time, only one process can be in the active state at a time. • Answer: ○ (true)

  14. Schematic View of a Monitor

  15. Condition Variables • A condition variable can only be used with the operations wait and signal. • The operation x.wait(); means that the process invoking this operation is suspended until another process invokes x.signal(); • The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect. Contrast this with the signal operation associated with semaphores, which always affects the state of the semaphore.
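
For comparison, this is roughly how a monitor procedure and a condition variable map onto pthreads, which use signal-and-continue semantics (hence the while loop around the wait). The names monitor_lock, x_cond, and resource_available are illustrative, not from the slides.

```c
/* Illustrative mapping of monitor + condition variable onto pthreads. */
#include <pthread.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  x_cond       = PTHREAD_COND_INITIALIZER;
static int resource_available = 0;      /* monitor state (assumed example)       */

void monitor_take(void)                 /* a monitor procedure                   */
{
    pthread_mutex_lock(&monitor_lock);  /* at most one process active inside     */
    while (!resource_available)         /* x.wait(): the condition may not hold  */
        pthread_cond_wait(&x_cond, &monitor_lock);   /* on wake-up, so retest    */
    resource_available = 0;
    pthread_mutex_unlock(&monitor_lock);
}

void monitor_give(void)
{
    pthread_mutex_lock(&monitor_lock);
    resource_available = 1;
    pthread_cond_signal(&x_cond);       /* x.signal(): no effect if nobody waits */
    pthread_mutex_unlock(&monitor_lock);
}
```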

  16. Monitor with condition variables

  17. Two Possibilities • When the x.signal() operation is invoked by a process P and there is a suspended process Q associated with condition x, two possibilities exist: • (1) P either waits until Q leaves the monitor, or waits for another condition. This choice is advocated by Hoare. • (2) Q either waits until P leaves the monitor, or waits for another condition. This seems more reasonable, since P was already executing in the monitor; however, the “logical” condition for which Q was waiting may no longer hold by the time Q is resumed. • Concurrent C: when process P executes the signal operation, process Q is immediately resumed.

  18. Solution to Dining Philosophers
     monitor dp {
         enum {thinking, hungry, eating} state[5];
         condition self[5];
         void pickup(int i)   // following slides
         void putdown(int i)  // following slides
         void test(int i)     // following slides
         void init() {
             for (int i = 0; i < 5; i++)
                 state[i] = thinking;
         }
     }
  • Philosopher i can delay herself when she is hungry but is unable to obtain the chopsticks she needs.

  19. pickup() Procedure • Each philosopher, before starting to eat, must invoke the operation pickup(), which may result in the suspension of the philosopher thread.
     void pickup(int i) {
         state[i] = hungry;
         test(i);
         if (state[i] != eating)
             self[i].wait();
     }
     void putdown(int i) {
         state[i] = thinking;
         // test left and right neighbors
         test((i + 4) % 5);
         test((i + 1) % 5);
     }

  20. test() Procedure • Philosopher i can set state[i] = eating only if her two neighbors are not eating; the signal on self[i] releases the philosopher's thread so it can proceed.
     void test(int i) {
         if ((state[(i + 4) % 5] != eating) &&
             (state[i] == hungry) &&
             (state[(i + 1) % 5] != eating)) {
             state[i] = eating;
             self[i].signal();
         }
     }

  21. Solution to Dining Philosophers • Philosopher i must invoke the operations pickup and putdown in the following sequence: dp.pickup(i); ... eat ... dp.putdown(i); • This solution ensures that no two neighbors are eating simultaneously, and that no deadlock will occur. • However, it is possible for a philosopher to starve to death.
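
A sketch of the dp monitor expressed with pthreads: a single mutex plays the role of the monitor lock and each self[i] becomes a condition variable. This is an illustrative translation under those assumptions, not the book's code.

```c
/* Illustrative pthreads translation of the dp monitor. */
#include <pthread.h>

enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[5];
static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER;   /* monitor lock     */
static pthread_cond_t  self[5];                             /* set up in dp_init */

static void test(int i)                 /* may let philosopher i start eating    */
{
    if (state[(i + 4) % 5] != EATING &&
        state[i] == HUNGRY &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void dp_init(void)
{
    for (int i = 0; i < 5; i++) {
        state[i] = THINKING;
        pthread_cond_init(&self[i], NULL);
    }
}

void pickup(int i)
{
    pthread_mutex_lock(&mlock);
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)          /* while, not if: pthread condition      */
        pthread_cond_wait(&self[i], &mlock);   /* variables are signal-and-continue */
    pthread_mutex_unlock(&mlock);
}

void putdown(int i)
{
    pthread_mutex_lock(&mlock);
    state[i] = THINKING;
    test((i + 4) % 5);                  /* left neighbor                         */
    test((i + 1) % 5);                  /* right neighbor                        */
    pthread_mutex_unlock(&mlock);
}
```

After a single call to dp_init(), each philosopher thread would loop over pickup(i); eat; putdown(i);, mirroring the sequence on this slide.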

  22. Implement monitor using semaphores • Variables:
     semaphore mutex;   // (initially = 1)
     semaphore next;    // (initially = 0)
     int next_count = 0;
  • Each external procedure F is replaced by:
     wait(mutex);
     ... body of F ...
     if (next_count > 0)
         signal(next);
     else
         signal(mutex);
  • Mutual exclusion within a monitor is ensured.

  23. Implement monitor using semaphores • For each condition variable x, we have:
     semaphore x_sem;   // (initially = 0)
     int x_count = 0;
  • The operation x.wait can be implemented as:
     x_count++;
     if (next_count > 0)
         signal(next);
     else
         signal(mutex);
     wait(x_sem);
     x_count--;

  24. Implement monitor using semaphores • The operation x.signal can be implemented as:
     if (x_count > 0) {
         next_count++;
         signal(x_sem);
         wait(next);
         next_count--;
     }
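
The three fragments on slides 22-24, collected into a compilable sketch using POSIX semaphores. The wrappers monitor_enter/monitor_leave and the names next_sem, x_wait, x_signal are illustrative additions around the slides' pseudocode.

```c
/* Illustrative packaging of the semaphore-based monitor implementation. */
#include <semaphore.h>

static sem_t mutex, next_sem, x_sem;
static int next_count = 0, x_count = 0;

void monitor_init(void)
{
    sem_init(&mutex, 0, 1);             /* mutex initially 1  */
    sem_init(&next_sem, 0, 0);          /* next initially 0   */
    sem_init(&x_sem, 0, 0);             /* x_sem initially 0  */
}

void monitor_enter(void) { sem_wait(&mutex); }

void monitor_leave(void)
{
    if (next_count > 0) sem_post(&next_sem);   /* resume a signaling process */
    else                sem_post(&mutex);      /* or open the monitor        */
}

void x_wait(void)                       /* x.wait                            */
{
    x_count++;
    monitor_leave();                    /* give up the monitor ...           */
    sem_wait(&x_sem);                   /* ... and sleep on the condition    */
    x_count--;
}

void x_signal(void)                     /* x.signal                          */
{
    if (x_count > 0) {
        next_count++;
        sem_post(&x_sem);               /* wake one waiter on x              */
        sem_wait(&next_sem);            /* wait until it leaves or waits     */
        next_count--;
    }
}
```

Each external procedure F of the monitor then becomes monitor_enter(); body of F; monitor_leave();, exactly as slide 22 describes.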

  25. Process-resumption order • If several processes are suspended on condition x, and an x.signal operation is executed by some process, how do we determine which of the suspended processes should be resumed next? • Besides FCFS ordering, the conditional-wait construct can be used: x.wait(c); where c is an integer expression that is evaluated when the wait operation is executed. • The value of c, called the priority number, is then stored with the name of the process that is suspended. When x.signal is executed, the process with the smallest associated priority number is resumed next.

  26. Example • Monitor controlling the allocation of a single resource among competing processes. • Each process, when requesting an allocation of the resource, specifies the maximum time it plans to use it; the monitor allocates the resource to the process with the shortest time-allocation request.
     monitor ResourceAllocation {
         boolean busy;
         condition x;
         void acquire(int time) {
             if (busy)
                 x.wait(time);
             busy = true;
         }
         void release() {
             busy = false;
             x.signal();
         }
         void init() {
             busy = false;
         }
     }
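
A hedged pthreads sketch of ResourceAllocation in which release() lets the waiter with the smallest declared time proceed first. Pthreads has no built-in conditional wait, so the waiting-request list below and all names are assumptions made for illustration.

```c
/* Illustrative shortest-time-first resource allocator (not the book's code). */
#include <pthread.h>
#include <stddef.h>

struct waiter { int time; struct waiter *next; };

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int busy = 0;
static struct waiter *queue = NULL;     /* unordered list of pending requests  */

static int is_minimum(const struct waiter *w)
{
    for (const struct waiter *p = queue; p; p = p->next)
        if (p != w && p->time < w->time)
            return 0;
    return 1;
}

void acquire(int time)                  /* analogue of x.wait(time)            */
{
    struct waiter self = { time, NULL };
    pthread_mutex_lock(&m);
    self.next = queue;                  /* register this request               */
    queue = &self;
    while (busy || !is_minimum(&self))
        pthread_cond_wait(&c, &m);
    for (struct waiter **pp = &queue; *pp; pp = &(*pp)->next)
        if (*pp == &self) { *pp = self.next; break; }   /* unlink ourselves    */
    busy = 1;
    pthread_mutex_unlock(&m);
}

void release(void)                      /* analogue of x.signal()              */
{
    pthread_mutex_lock(&m);
    busy = 0;
    pthread_cond_broadcast(&c);         /* all waiters retest; smallest time wins */
    pthread_mutex_unlock(&m);
}
```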

  27. Example • A process that needs to access the resource must follow the sequence: R.acquire(t); ... access the resource ... R.release(); • Unfortunately, the monitor concept cannot guarantee that this sequence will be followed. A process might: access the resource without first gaining permission; never release the resource once it has been granted; request the same resource again before releasing it; or release a resource that it never requested.

  28. Access-control problem • Two conditions must be checked to establish the correctness of the system: • User processes must always make their calls on the monitor in a correct sequence. • We must ensure that an uncooperative process does not simply ignore the mutual-exclusion gateway provided by the monitor and try to access the shared resource directly, without using the access protocols. • These checks are not reasonable for a large or dynamic system; the problem can be solved only by additional mechanisms (Chapter 18).

  29. OS Synchronization § 7.8 Solaris 2: • Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing. • Synchronization in Solaris 2 provides: adaptive mutexes, condition variables, semaphores, and reader-writer locks.

  30. Adaptive Mutex • An adaptive mutex protects access to every critical data item. It starts as a standard semaphore implemented as a spinlock. • If the data are locked (in use), the adaptive mutex does one of two things: • If the lock is held by a thread that is currently running on another CPU, the requesting thread spins while waiting for the lock to become available, because the thread holding the lock is likely to finish soon. • If the thread holding the lock is not currently in the run state, the requesting thread blocks and goes to sleep until the lock is released; sleeping avoids spinning when the lock will not be freed reasonably quickly.

  31. Adaptive Mutex (quiz on the material of slide 30) • Multiple-Choice Question: ( ) Different operations may be adopted by the adaptive-mutex mechanism when a thread requests locked data. The decision is based on the status of (a) located memories (b) relative CPU speed (c) the thread holding the lock (d) the type of monitor entries • Answer: c

  32. Adaptive Mutex • Adaptive mutexes protect only data that are accessed by short code segments, where a lock will be held for less than a few hundred instructions; spin-waiting on anything longer would be exceedingly inefficient. • For longer code segments, condition variables and semaphores are used: if the desired lock is already held, the thread issues a wait and goes to sleep. • For such segments, the cost of putting a thread to sleep and waking it up is less than the cost of wasting several hundred instructions waiting in a spinlock.
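
Solaris's adaptive mutex is kernel code, but the general "spin a little, then sleep" idea can be sketched in user space. This is a generic illustration only, not the Solaris implementation: SPIN_LIMIT is an arbitrary bound, and this version cannot observe whether the lock holder is currently running on another CPU.

```c
/* Generic spin-then-block lock sketch (illustrative, not Solaris code). */
#include <pthread.h>

#define SPIN_LIMIT 100                  /* arbitrary, illustrative bound        */

void hybrid_lock(pthread_mutex_t *m)
{
    for (int i = 0; i < SPIN_LIMIT; i++)
        if (pthread_mutex_trylock(m) == 0)
            return;                     /* acquired the lock while spinning     */
    pthread_mutex_lock(m);              /* give up spinning and block (sleep)   */
}
```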

  33. Readers-Writers Lock • Readers-writers locks are used to protect data that are accessed frequently, but usually only in a read-only manner. • In these circumstances, readers-writers locks are more efficient than semaphores. • They are expensive to implement, so again they are used only on long sections of code.
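
POSIX exposes the same idea directly as pthread_rwlock_t. A minimal usage sketch follows; shared_value is an illustrative read-mostly datum.

```c
/* Illustrative readers-writers lock usage with POSIX pthread_rwlock_t. */
#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value;                /* read-mostly data (assumed example)   */

int read_value(void)
{
    pthread_rwlock_rdlock(&rw);         /* many readers may hold this together  */
    int v = shared_value;
    pthread_rwlock_unlock(&rw);
    return v;
}

void write_value(int v)
{
    pthread_rwlock_wrlock(&rw);         /* a writer gets exclusive access       */
    shared_value = v;
    pthread_rwlock_unlock(&rw);
}
```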

  34. OS Synchronization Windows 2000: • Uses interrupt masks to protect access to global resources on uniprocessor systems. • Uses spinlocks on multiprocessor systems. • Also provides dispatcher objects, which may act as either mutexes or semaphores. • Dispatcher objects may also provide events. An event acts much like a condition variable.

  35. Atomic Transactions § 7.9 • Need to make sure that a CS forms a single logical unit of work that either is performed in its entirety or is not performed at all. • A collection of instructions (or operations) that performs a single logical function is called a transaction. • A major issue in processing transactions is the preservation of atomicity despite the possibility of failures within the computer system.

  36. Commit & Abort • From our point of view, a transaction is simply a sequence of read and write operations, terminated by a commit or an abort operation. • commit: signifies that the transaction has terminated successfully. • abort: signifies that the transaction had to cease its normal execution due to some logical error.

  37. Roll Back • An aborted transaction must have no effect on the state of the data that it has already modified, so that the atomicity property is ensured. • Thus, the state of the data accessed by an aborted transaction must be restored to what it was just before the transaction started executing; the transaction is said to have been rolled back.

  38. Device Properties (Skip) • To determine how the system should ensure atomicity, we first need to identify the properties of the devices used for storing the various data accessed by the transactions. • Volatile storage: information residing in volatile storage does not usually survive system crashes. • Nonvolatile storage: information residing in nonvolatile storage usually survives system crashes. • Stable storage: information residing in stable storage is never lost.

  39. Mechanisms for ensuringTransaction Atomicity • Log-Based Recovery • Checkpoints • Concurrent Atomic Transactions

  40. Log-Based Recovery § 7.9.2 • Record information describing all the modifications made by the transaction to the various data it accessed. • Write-ahead logging: each log record describes a single write operation of a transaction and has these fields: • Transaction name • Data item name • Old value • New value

  41. Log-Based Recovery • Before a transaction Ti starts its execution, the record <Ti starts> is written to the log. • During its execution, any write operation by Ti is preceded by the writing of the appropriate new record to the log. • When Ti commits, the record <Ti commits> is written to the log.

  42. Log-Based Recovery • Performance penalty ... two physical writes are required for every logical write requested. • More storage is needed: for the data and the log. • The recovery algorithm uses two procedures: • undo(Ti) • redo(Ti)

  43. Log-Based Recovery • If a transaction Ti aborts, we restore the state of the data that it has updated by executing undo(Ti). • If a system failure occurs, we must consult the log to determine the proper operation: • If the log contains the <Ti starts> record but not the <Ti commits> record, transaction Ti needs to be undone. • If the log contains both the <Ti starts> and the <Ti commits> records, transaction Ti needs to be redone. • Drawbacks ...
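
As an illustration of the rule above, a recovery routine could scan an in-memory log and decide between undo and redo as shown below. The record layout and names are assumptions made for this sketch, not the book's data structures.

```c
/* Illustrative write-ahead log records and the undo/redo decision. */
#include <string.h>

enum rec_kind { TX_START, TX_WRITE, TX_COMMIT };

struct log_record {
    enum rec_kind kind;
    char tx[16];                        /* transaction name                     */
    char item[16];                      /* data item name (TX_WRITE only)       */
    int  old_value, new_value;          /* before/after images (TX_WRITE only)  */
};

/* A transaction whose <T commits> record is present must be redone (its
 * new values re-applied, scanning forward); one that only has <T starts>
 * must be undone (its old values restored, scanning backward). */
int needs_redo(const struct log_record *log, int n, const char *tx)
{
    for (int i = 0; i < n; i++)
        if (log[i].kind == TX_COMMIT && strcmp(log[i].tx, tx) == 0)
            return 1;                   /* <T commits> found -> redo(T)         */
    return 0;                           /* only <T starts>    -> undo(T)        */
}
```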

  44. Log-Based Recovery • Drawbacks: • The search through the log takes time. • Most of the transactions that need to be redone have already updated the data, so redoing their modifications takes additional time. • To reduce this overhead, we use checkpoints.

  45. Checkpoints § 7.9.3 • In addition to the write-ahead log, the system periodically performs checkpoints, which require: • Outputting all log records currently residing in main memory onto stable storage. • Outputting all modified data residing in main memory to stable storage. • Outputting a log record <checkpoint> onto stable storage. • Checkpoints allow the system to streamline its recovery procedure.

  46. Checkpoints • After a failure occurs, the recovery routine examines the log to determine the most recent transaction Ti that started before the most recent checkpoint. Let T be the set consisting of Ti and all transactions Tj that started execution after Ti; redo and undo need to be applied only to the transactions in T: • For all Tk in T such that the record <Tk commits> appears in the log, execute redo(Tk). • For all Tk in T that have no <Tk commits> record in the log, execute undo(Tk).

  47. Concurrent Atomic Transactions • Serializability can be maintained by simply executing each transaction within a critical section, but this is too restrictive. • Instead, we can allow transactions to overlap their execution while maintaining serializability, by using concurrency-control algorithms.

  48. Serial Schedule § 7.9.4.1 • Schedule 1 (T0 followed by T1):
         T0              T1
         read(A)
         write(A)
         read(B)
         write(B)
                         read(A)
                         write(A)
                         read(B)
                         write(B)
  • A schedule in which each transaction is executed atomically is called a serial schedule. • For a set of n transactions, there exist n! different valid serial schedules. • Each serial schedule is correct.

  49. Nonserial Schedule • If two transactions are allowed to overlap their execution, the result is a nonserial schedule. • A nonserial schedule is not necessarily incorrect.

  50. Nonserial Schedule • Two consecutive operations O1 and O2 of Ti and Tj conflict if they access the same data item and at least one of them is a write operation. • Schedule 2 (a concurrent schedule):
         T0              T1
         read(A)
         write(A)
                         read(A)
                         write(A)
         read(B)
         write(B)
                         read(B)
                         write(B)
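
The conflict test from this slide, written as a small C predicate; the operation representation is an assumption made for illustration. Two consecutive operations that do not conflict can be swapped without changing the outcome, which is how Schedule 2 can be shown equivalent to the serial Schedule 1.

```c
/* Illustrative conflict test for two schedule operations. */
#include <string.h>

struct op {
    int  tx;        /* which transaction issued the operation (0, 1, ...)  */
    char kind;      /* 'r' for read, 'w' for write                         */
    char item[8];   /* name of the data item, e.g. "A" or "B"              */
};

/* Operations of different transactions conflict if they access the same
 * data item and at least one of them is a write. */
int conflict(const struct op *a, const struct op *b)
{
    return a->tx != b->tx &&
           strcmp(a->item, b->item) == 0 &&
           (a->kind == 'w' || b->kind == 'w');
}
```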
