
Chapter 6: Process Synchronization



  1. Chapter 6: Process Synchronization Objectives • The Critical-Section Problem • Synchronization Hardware • Semaphores • Classical Problems of Synchronization • Critical Regions • Monitors • Synchronization in Solaris 2 & Windows 2000

  2. Why synchronization • When we have multiple processes running at the same time (i.e., concurrently), we know that we must protect them from one another • E.g., protect one process’s memory from access by another process • But what if we want those processes/threads to cooperate to solve a single problem? • E.g., one thread that manages the mouse and keyboard, another that manages the display, and a third that runs your programs • In this case, the processes/threads must be synchronized

  3. Problem with Concurrency • Just like shuffling cards, the instructions of two processes are interleaved arbitrarily • For cooperating processes, the order of some instructions is irrelevant. However, certain instruction combinations must be prevented • For example:

       Process A        Process B
       A = 1;           B = 2;        ← concurrent access does not matter
       A = B + 1;       B = B * 2;    ← this combination matters!

  • A race condition is a situation where two or more processes access shared data concurrently. The result differs depending on who wins the race to it!

  4. A Concurrency example

       Time   Person A                       Person B
       3:00   Look in fridge; out of milk
       3:05   Leave for store
       3:10   Arrive at store                Look in fridge; out of milk
       3:15   Buy milk                       Leave for store
       3:20   Leave the store                Arrive at store
       3:25   Arrive home, put milk away     Buy milk
       3:30                                  Leave the store
       3:35                                  Arrive home. OH! OH!

  • Having too much milk isn't a big deal, but in terms of data access there is a problem • New milk/data may overwrite the old data • What about wasted resources? – too much milk

  5. Bounded-Buffer • Shared data:

       #define BUFFER_SIZE 10
       typedef struct {
           . . .
       } item;
       item buffer[BUFFER_SIZE];
       int in = 0;
       int out = 0;
       int counter = 0;

  • Producer process:

       item nextProduced;
       while (1) {
           while (counter == BUFFER_SIZE)
               ;   /* do nothing */
           buffer[in] = nextProduced;
           in = (in + 1) % BUFFER_SIZE;
           counter++;
       }

  • Consumer process:

       item nextConsumed;
       while (1) {
           while (counter == 0)
               ;   /* do nothing */
           nextConsumed = buffer[out];
           out = (out + 1) % BUFFER_SIZE;
           counter--;
       }

  • The statements counter++; and counter--; must be performed atomically. An atomic operation is an operation that completes in its entirety without interruption

  6. Bounded Buffer • If both the producer and consumer attempt to update the buffer concurrently, the assembly language statements may get interleaved • Interleaving depends upon how the producer and consumer processes are scheduled • The statement “counter++” may be implemented in machine language as:

       register1 = counter
       register1 = register1 + 1
       counter = register1

  • The statement “counter--” may be implemented as:

       register2 = counter
       register2 = register2 – 1
       counter = register2

  7. Bounded Buffer • Assume counter is initially 5. One interleaving of statements is:

       producer: register1 = counter          (register1 = 5)
       producer: register1 = register1 + 1    (register1 = 6)
       consumer: register2 = counter          (register2 = 5)
       consumer: register2 = register2 – 1    (register2 = 4)
       producer: counter = register1          (counter = 6)
       consumer: counter = register2          (counter = 4)

  • The value of counter may end up either 4 or 6, where the correct result should be 5

  8. Race Condition • Race condition: The situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last • To prevent race conditions, concurrent processes must be synchronized

  9. The Critical-Section Problem • n processes all competing to use some shared data • Each process has a code segment, called critical section, in which the shared data is accessed • Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section

  10. The Critical-Section Problem • A solution to the critical-section problem MUST satisfy: • Mutual Exclusion: At any time, at most one process can be executing critical section (CS) code • Progress: If no process is in its CS and there are one or more processes that wish to enter their CS, it must be possible for those processes to negotiate who will proceed next into the CS • No deadlock • No process in its remainder section can participate in this decision • Bounded Waiting: After a process P has made a request to enter its CS, there is a limit on the number of times that the other processes are allowed to enter their CS before P’s request is granted • The algorithm must be deterministic; otherwise a process could suffer from starvation

  11. Two-Process Solution to the Critical-Section Problem • Only 2 processes, P0 and P1 • General structure of process Pi (other process Pj):

       do {
           entry section
               critical section (CS)
           exit section
               remainder section
       } while (1);

  • Processes may share some common variables to synchronize their actions

  12. Peterson’s Solution • Two process solution • Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted. • The two processes share two variables: • int turn; • Boolean flag[2] • The variable turn indicates whose turn it is to enter the critical section. • The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies that process Pi is ready!

  13. Two-Process Solution to the Critical-Section Problem — Peterson’s Solution • Shared variables, initialized as:

       flag[0] := false; flag[1] := false;
       turn := 0;

  • Process P0:

       repeat
           flag[0] := true;               // 0 wants in
           turn := 1;                     // 0 gives a chance to 1
           while (flag[1] and turn = 1) { };
           CS
           flag[0] := false;              // 0 is done
           RS
       forever

  • Process P1:

       repeat
           flag[1] := true;               // 1 wants in
           turn := 0;                     // 1 gives a chance to 0
           while (flag[0] and turn = 0) { };
           CS
           flag[1] := false;              // 1 is done
           RS
       forever

  • The algorithm is provably correct: turn can only be 0 or 1, even if both flags are set to true

  14. Drawbacks of Software Solutions • Complicated to program • Busy waiting (wasted CPU cycles) • It would be more efficient to block processes that are waiting (just as if they had requested I/O) • This suggests implementing the permission/waiting function in the Operating System • But first, let’s look at some hardware approaches

  15. Synchronization Hardware • Many systems provide hardware support for critical section code • Uniprocessors – could disable interrupts • Currently running code would execute without preemption • Generally too inefficient on multiprocessor systems • Operating systems using this not broadly scalable • Modern machines provide special atomic hardware instructions • Atomic = non-interruptable • Either test memory word and set value • Or swap contents of two memory words

  16. Solution using TestAndSet • Shared boolean variable lock, initialized to FALSE • Definition:

       boolean TestAndSet(boolean *target) {
           boolean rv = *target;
           *target = TRUE;
           return rv;
       }

  • Solution:

       while (TRUE) {
           while (TestAndSet(&lock))
               ;   /* do nothing */
           // critical section
           lock = FALSE;
           // remainder section
       }

  17. Solution using Swap • Shared Boolean variable lock initialized to FALSE; each process has a local Boolean variable key • Definition:

       void Swap(boolean *a, boolean *b) {
           boolean temp = *a;
           *a = *b;
           *b = temp;
       }

  • Solution:

       while (TRUE) {
           key = TRUE;
           while (key == TRUE)
               Swap(&lock, &key);
           // critical section
           lock = FALSE;
           // remainder section
       }

  18. Test-and-Set Instruction • Mutual exclusion is assured: if Pi enters CS, the other processes are busy waiting • Satisfies the progress requirement • When Pi exits CS, the selection of the next Pj to enter CS is arbitrary • No bounded waiting (it is a race!) • Some processors (e.g., Pentium) provide an atomic Swap(a,b) instruction that swaps the contents of a and b

  19. Operating Systems or Programming Language Support for Concurrency • Solutions based on machine instructions such as test-and-set involve tricky coding • For example, the TestAndSet algorithm does not satisfy all the requirements of a solution to the critical-section problem • Starvation is possible • We can build better solutions by providing synchronization mechanisms in the Operating System or Programming Language (this leaves the really tricky code to systems programmers)

  20. Semaphores • A Semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations: • wait(S) — sometimes called P() (Dutch proberen, “to test”) • signal(S) — sometimes called V() (Dutch verhogen, “to increment”) • S is initialized to a positive value (to allow someone in at the beginning) • Useful when critical sections last for a short time, or we have lots of CPUs • The classical definition of wait and signal:

       wait(S) {
           while (S <= 0)
               ;   /* busy wait */
           S--;
       }

       signal(S) {
           S++;
       }

  21. Atomicity in Semaphores • The test-and-decrement sequence in wait must be atomic, but not the loop • Signal is atomic • No two processes can be allowed to execute atomic sections simultaneously [Figure: flowchart of wait(S) — the test S <= 0 and the decrement S-- form a single atomic step]

  22. Semaphores in Action • Initialize mutex to 1 • Each process Pi, Pj:

       repeat
           wait(mutex);
           CS
           signal(mutex);
           RS
       forever

  23. Semaphores: the Problem of Busy Waiting • Semaphore definitions (so far) all require busy waiting • This type of semaphore is called a spinlock • This continual looping is a problem in a multiprogramming setting • As a solution, modify the definition of the wait and signal operations • Define a semaphore as a record:

       typedef struct {
           int value;
           struct process *L;
       } semaphore;

  • Assume two simple operations: • block suspends the process that invokes it • wakeup(P) resumes the execution of a blocked process P • Semaphore operations are now defined as:

       wait(S):
           S.value--;
           if (S.value < 0) {
               add this process to S.L;
               block;
           }

       signal(S):
           S.value++;
           if (S.value <= 0) {
               remove a process P from S.L;
               wakeup(P);
           }

  24. Synchronizing Processes using Semaphores • Two processes: P1 and P2 • Statement S1 in P1 needs to be performed before statement S2 in P2 • We want a way to make P2 wait until P1 tells it it is OK to proceed • Define a semaphore “synch” • Initialize synch to 0 • Put this in P1:

       S1;
       signal(synch);

  • And this in P2:

       wait(synch);
       S2;

  25. Deadlock and Starvation • An implementation of a semaphore with a waiting queue may result in: • Deadlock: two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes • Let S and Q be two semaphores initialized to 1:

       P0:              P1:
       wait(S);         wait(Q);
       wait(Q);         wait(S);
       ...              ...
       signal(S);       signal(Q);
       signal(Q);       signal(S);

  • Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended • This can happen, for example, if we add and remove processes from the list associated with a semaphore in LIFO manner

  26. Classical Problems of Synchronization: Bounded-Buffer Problem • We have n buffers; each buffer is capable of holding ONE item • Shared data: semaphore full, empty, mutex; • Initially: full = 0, empty = n, mutex = 1 • Producer:

       do {
           ...
           produce an item in nextp
           ...
           wait(empty);
           wait(mutex);
           ...
           add nextp to buffer
           ...
           signal(mutex);
           signal(full);
       } while (1);

  • Consumer:

       do {
           wait(full);
           wait(mutex);
           ...
           remove an item from buffer to nextc
           ...
           signal(mutex);
           signal(empty);
           ...
           consume the item in nextc
           ...
       } while (1);

  27. Classical Problems of Synchronization: Readers-Writers Problem • Shared data: semaphore mutex, wrt; int readcount; • Initially: mutex = 1, wrt = 1, readcount = 0 • Writer:

       wait(wrt);
       ...
       writing is performed
       ...
       signal(wrt);

  • Reader:

       wait(mutex);
       readcount++;
       if (readcount == 1)
           wait(wrt);
       signal(mutex);
       ...
       reading is performed
       ...
       wait(mutex);
       readcount--;
       if (readcount == 0)
           signal(wrt);
       signal(mutex);

  28. Classical Problems of Synchronization: Dining-Philosophers Problem • Five philosophers spend their lives thinking and eating • They have a circular table and each has a chair • A single bowl of rice is on the table • There is a single chopstick between each pair of philosophers • If a philosopher gets hungry, he tries to pick up the chopsticks on either side of him • The philosophers pick up only one chopstick in a single operation • A mechanism is needed to allocate resources among several processes in a manner that is free of deadlock and starvation

  29. Classical Problems of Synchronization: Dining-Philosophers Problem • Shared data: semaphore chopstick[5]; • Initially all values are 1 • Philosopher i:

       do {
           wait(chopstick[i]);
           wait(chopstick[(i+1) % 5]);
           ...
           eat
           ...
           signal(chopstick[i]);
           signal(chopstick[(i+1) % 5]);
           ...
           think
           ...
       } while (1);

  • This solution guarantees that no neighbors eat simultaneously • This solution also might cause a deadlock

  30. Incorrect use of Semaphores • Incorrect use of semaphores can result in timing errors that are difficult to detect • For example, suppose that a process interchanges the order in which the wait and signal operations are executed:

       repeat
           signal(mutex);
           CS
           wait(mutex);
           RS
       forever

  • In this situation, several processes may be executing in their critical section simultaneously, violating the mutual exclusion requirement • To deal with such errors, a number of high-level synchronization constructs, provided by the systems software (OS kernel and/or language compilers) as programming tools, have been introduced: • Critical regions • Monitors

  31. Monitors • High-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes • A monitor is characterized by a set of programmer-defined operations:

       monitor monitor-name {
           shared variable declarations

           procedure body P1(…) {
               . . .
           }
           procedure body P2(…) {
               . . .
           }
           ...
           procedure body Pn(…) {
               . . .
           }

           {
               initialization code
           }
       }

  32. Monitors • To allow a process to wait within the monitor, a condition variable must be declared, as condition x, y; • Condition variable can only be used with the operations wait and signal. • The operation x.wait();means that the process invoking this operation is suspended until another process invokes x.signal(); • The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect

  33. Monitors [Figures: schematic view of a monitor; a monitor with condition variables]

  34. Dining Philosophers Example

       monitor dp {
           enum {thinking, hungry, eating} state[5];
           condition self[5];

           void pickup(int i) {
               state[i] = hungry;
               test(i);
               if (state[i] != eating)
                   self[i].wait();
           }

           void putdown(int i) {
               state[i] = thinking;
               // test left and right neighbors
               test((i + 4) % 5);
               test((i + 1) % 5);
           }

           void test(int i) {
               if ((state[(i + 4) % 5] != eating) &&
                   (state[i] == hungry) &&
                   (state[(i + 1) % 5] != eating)) {
                   state[i] = eating;
                   self[i].signal();
               }
           }

           void init() {
               for (int i = 0; i < 5; i++)
                   state[i] = thinking;
           }
       }

  35. Monitor Implementation Using Semaphores • Variables:

       semaphore mutex;      // (initially = 1)
       semaphore next;       // (initially = 0)
       int next-count = 0;

  • Each external procedure F will be replaced by:

       wait(mutex);
       ...
       body of F;
       ...
       if (next-count > 0)
           signal(next);
       else
           signal(mutex);

  • Mutual exclusion within a monitor is ensured • Next, we consider how condition variables are implemented…

  36. Monitor Implementation • For each condition variable x, we have:

       semaphore x-sem;      // (initially = 0)
       int x-count = 0;

  • The operation x.wait can be implemented as:

       x-count++;
       if (next-count > 0)
           signal(next);
       else
           signal(mutex);
       wait(x-sem);
       x-count--;

  • The operation x.signal can be implemented as:

       if (x-count > 0) {
           next-count++;
           signal(x-sem);
           wait(next);
           next-count--;
       }

  37. Monitor Implementation: Process-resumption Order • Conditional-wait construct: x.wait(c); • c — an integer expression evaluated when the wait operation is executed • The value of c (a priority number) is stored with the name of the process that is suspended • When x.signal is executed, the process with the smallest associated priority number is resumed next • Check two conditions to establish correctness of the system: • User processes must always make their calls on the monitor in a correct sequence • Must ensure that an uncooperative process does not ignore the mutual-exclusion gateway provided by the monitor, and try to access the shared resource directly, without using the access protocols

  38. Solaris 2 Synchronization • Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing • Uses adaptive mutexes for efficiency when protecting data from short code segments • Uses condition variables and readers-writers locks when longer sections of code need access to data • Uses turnstiles (a queue structure containing threads blocked on a lock) to order the list of threads waiting to acquire either an adaptive mutex or reader-writer lock

  39. Windows 2000 Synchronization • Uses interrupt masks to protect access to global resources on uniprocessor systems • Uses spinlocks on multiprocessor systems • Also provides dispatcher objects which may be used as mutexes and semaphores • Dispatcher objects may also provide events • An event acts much like a condition variable

  40. Synchronization Primitives — Summary

                    SHARED MEMORY                      NO SHARED MEMORY
       high level   monitors, critical regions         message passing, remote
                                                       procedure calls, sockets
                                                       (discussed in Section 3.4)
       low level    semaphores
       hardware     load/store, interrupt
                    disable/enable, test-and-set
