
Chapter 7: Process Synchronization


Presentation Transcript


  1. Chapter 7: Process Synchronization • Background • The Critical-Section Problem • Synchronization Hardware • Semaphores • Classical Problems of Synchronization • Critical Regions • Monitors • Message Passing • Synchronization in Solaris 2 & Windows 2000 Operating System Concepts

  2. Background • Concurrent access to shared data may result in data inconsistency. • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes. • The shared-memory solution to the bounded-buffer problem (Chapter 4) allows at most n – 1 items in the buffer at the same time. A solution that uses all n buffers is not simple. • Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0 and incremented each time a new item is added to the buffer. Operating System Concepts

  3. Bounded-Buffer • Shared data #define BUFFER_SIZE 10 typedef struct { . . . } item; item buffer[BUFFER_SIZE]; int in = 0; int out = 0; int counter = 0; Operating System Concepts

  4. Bounded-Buffer • Producer process item nextProduced; while (1) { /* produce an item in nextProduced */ while (counter == BUFFER_SIZE) ; /* do nothing */ buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; counter++; } Operating System Concepts

  5. Bounded-Buffer • Consumer process item nextConsumed; while (1) { while (counter == 0) ; /* do nothing */ nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; counter--; } Operating System Concepts

  6. Bounded Buffer • The statements counter++ and counter-- must be performed atomically. • Atomic operation means an operation that completes in its entirety without interruption. Operating System Concepts

  7. Bounded Buffer • The statement counter++ may be implemented in machine language as: register1 = counter; register1 = register1 + 1; counter = register1 • The statement counter-- may be implemented as: register2 = counter; register2 = register2 – 1; counter = register2 Operating System Concepts

  8. Bounded Buffer • If both the producer and consumer attempt to update the buffer concurrently, the assembly language statements may get interleaved. • Interleaving depends upon how the producer and consumer processes are scheduled. Operating System Concepts

  9. Bounded Buffer • Assume counter is initially 5. One interleaving of statements is: producer: register1 = counter (register1 = 5); producer: register1 = register1 + 1 (register1 = 6); consumer: register2 = counter (register2 = 5); consumer: register2 = register2 – 1 (register2 = 4); producer: counter = register1 (counter = 6); consumer: counter = register2 (counter = 4) • The value of counter may be either 4 or 6, where the correct result should be 5. Operating System Concepts

  10. Race Condition • Race condition: the situation where several processes access and manipulate shared data concurrently, and the final value of the shared data depends upon which process finishes last. • To prevent race conditions, concurrent processes must be synchronized. Operating System Concepts
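
The same lost update can be reproduced with threads. The following is a minimal sketch (not part of the original slides) using POSIX threads; the iteration count and function names are arbitrary choices. Two threads increment an unprotected shared counter, and the final value is usually less than expected:

    /* Illustrative race-condition demo: two POSIX threads increment a shared
       counter with no synchronization, so the final value is usually less
       than the expected 2000000. */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    int counter = 0;                    /* shared data, unprotected */

    void *increment(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter++;                  /* load, add, store: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* lost updates show up here */
        return 0;
    }

Compile with -pthread; repeated runs typically print different values, which is exactly the nondeterministic interleaving described on slide 9.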

  11. The Critical-Section Problem • n processes all competing to use some shared data • Each process has a code segment, called critical section, in which the shared data is accessed. • Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. • The critical section may be followed by an exit section. • The remaining code is the remainder section. • Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section. Operating System Concepts

  12. Solution to Critical-Section Problem 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. 3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted (No starvation). • Assume that each process executes at a nonzero speed • No assumption concerning relative speeds of processes or the number of processors (CPUs). Operating System Concepts

  13. Initial Attempts to Solve Problem • Only 2 processes, P0 and P1 • General structure of process Pi (other process Pj) do { entry section critical section exit section remainder section } while (1); • Processes may share some common variables to synchronize their actions. Operating System Concepts

  14. Algorithm 1 – Strict Alternation • Shared variables: • int turn; initially turn = 0 • turn == i ⇒ Pi can enter its critical section • Process Pi do { while (turn != i) ; critical section turn = j; remainder section } while (1); • Satisfies mutual exclusion, but not progress Operating System Concepts

  15. Algorithm 1 – Strict Alternation • This solution may violate the progress requirement. Since the processes must strictly alternate entering their critical sections, a process wanting to enter its critical section twice in a row will be blocked until the other process decides to enter (and leave) its critical section, as shown in the table below. • The solution of strict alternation is shown in Ex5.c. Be sure to note the way shared memory is allocated using shmget and shmat; a sketch of that pattern follows. Operating System Concepts
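
Ex5.c itself is not reproduced in this transcript, but the following hedged sketch shows the general shmget/shmat pattern such a two-process strict-alternation program might use (the structure and identifiers here are illustrative, not the actual Ex5.c):

    /* Hedged sketch of the shmget/shmat pattern: parent and child share one
       int, "turn", and strictly alternate entering a critical section once
       each. */
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* create a System V shared-memory segment big enough for one int */
        int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
        if (shmid < 0) { perror("shmget"); exit(1); }

        volatile int *turn = shmat(shmid, NULL, 0);  /* map it into memory */
        *turn = 0;                                   /* process 0 goes first */

        if (fork() == 0) {                 /* child plays the role of P1 */
            while (*turn != 1) ;           /* busy-wait for its turn */
            printf("child in critical section\n");
            *turn = 0;                     /* hand the turn back */
            _exit(0);
        }
        while (*turn != 0) ;               /* parent plays the role of P0 */
        printf("parent in critical section\n");
        *turn = 1;                         /* give the turn to the child */

        wait(NULL);                        /* reap the child */
        shmctl(shmid, IPC_RMID, NULL);     /* remove the segment */
        return 0;
    }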

  16. Algorithm 2 – Shared locks • Shared variables • boolean flag[2]; initially flag[0] = flag[1] = false • flag[i] == true ⇒ Pi ready to enter its critical section • Process Pi do { flag[i] = true; while (flag[j]) ; critical section flag[i] = false; remainder section } while (1); • Satisfies mutual exclusion, but not the progress requirement. If flag[0] and flag[1] are both set to true, both processes loop forever. Operating System Concepts

  17. Algorithm 3 – Peterson’s Solution • Combines the shared variables of algorithms 1 and 2. • Process Pi do { flag[i] = true; turn = j; while (flag[j] && turn == j) ; critical section flag[i] = false; remainder section } while (1); • Meets all three requirements; solves the critical-section problem for two processes. • Unfortunately, this solution involves busy waiting in the while loop. Busy waiting can lead to problems we will discuss below. Operating System Concepts
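
For reference, here is an illustrative, self-contained version of Peterson's solution using two POSIX threads; it is not from the slides, and it uses C11 sequentially consistent atomics because plain, unordered loads and stores are not safe on modern hardware:

    /* Illustrative Peterson's solution with two POSIX threads. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    atomic_bool flag[2];                    /* "I want to enter" */
    atomic_int  turn;                       /* whose turn to defer to */
    int shared_counter = 0;                 /* data protected by the lock */

    static void lock(int i) {
        int j = 1 - i;
        atomic_store(&flag[i], true);       /* announce intent */
        atomic_store(&turn, j);             /* let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                               /* busy wait */
    }
    static void unlock(int i) { atomic_store(&flag[i], false); }

    static void *worker(void *arg) {
        int i = *(int *)arg;
        for (int k = 0; k < 100000; k++) {
            lock(i);
            shared_counter++;               /* critical section */
            unlock(i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        int id[2] = {0, 1};
        pthread_create(&t[0], NULL, worker, &id[0]);
        pthread_create(&t[1], NULL, worker, &id[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        printf("counter = %d (expect 200000)\n", shared_counter);
        return 0;
    }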

  18. Bakery Algorithm – Extension of Peterson’s solution to n processes • Before entering its critical section, a process receives a number. The holder of the smallest number enters the critical section. • If processes Pi and Pj receive the same number and i < j, then Pi is served first; otherwise Pj is served first. • The numbering scheme always generates numbers in increasing order of enumeration; i.e., 1,2,3,3,3,3,4,5... Critical section for n processes Operating System Concepts

  19. Bakery Algorithm • Notation: ≤ denotes lexicographical order on (ticket #, process id #) • (a,b) < (c,d) if a < c, or if a == c and b < d • max(a0, …, an−1) is a number k such that k ≥ ai for i = 0, …, n − 1 • Shared data boolean choosing[n]; int number[n]; Data structures are initialized to false and 0, respectively Operating System Concepts

  20. Bakery Algorithm do { choosing[i] = true; number[i] = max(number[0], number[1], …, number [n – 1])+1; choosing[i] = false; for (j = 0; j < n; j++) { while (choosing[j]) ; while ((number[j] != 0) && ((number[j] < number[i]) || (number[j] == number[i] && j < i))) ; } critical section number[i] = 0; remainder section } while (1); Operating System Concepts

  21. Synchronization Hardware • The critical-section problem could be solved simply in a uniprocessor environment if we could forbid interrupts while a shared variable is being modified. This solution is not feasible in a multiprocessor environment. • Test and modify the content of a word atomically. boolean TestAndSet(boolean &target) { boolean rv = target; target = true; return rv; } Operating System Concepts

  22. Mutual Exclusion with Test-and-Set • Shared data: boolean lock = false; • Process Pi do { while (TestAndSet(lock)) ; critical section lock = false; remainder section } Operating System Concepts
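
C11 offers an equivalent primitive, atomic_flag, whose atomic_flag_test_and_set behaves like the TestAndSet above. A minimal spin-lock sketch (the acquire/release wrapper names are ours, not from the slides):

    /* Hedged sketch: C11's atomic_flag provides test-and-set portably. */
    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;       /* initially clear (false) */

    void acquire(void) {
        /* atomically set the flag and get its previous value;
           keep spinning while another thread already holds it */
        while (atomic_flag_test_and_set(&lock))
            ;                                  /* busy wait */
    }

    void release(void) {
        atomic_flag_clear(&lock);              /* lock = false */
    }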

  23. Synchronization Hardware • Atomically swap two variables. void Swap(boolean &a, boolean &b) { boolean temp = a; a = b; b = temp; } Operating System Concepts

  24. Mutual Exclusion with Swap • Shared data (initialized to false): boolean lock; boolean waiting[n]; • Process Pi (key is a local boolean variable) do { key = true; while (key == true) Swap(lock, key); critical section lock = false; remainder section } • These two solutions, in Figure 7.7 and Figure 7.9, do not satisfy the bounded-waiting requirement. • The algorithm in Figure 7.10 satisfies all the critical-section requirements. Operating System Concepts

  25. Mutual Exclusion Solutions • Disable interrupts. • Shared lock variable. • Strict alternation. See Ex6.c. See the manual entry for shared memory allocation: man shmget and Ex6.c and programs in the shm directory. • Peterson's solution [1981]. Challenge: modify the code in Ex6.c to implement Peterson's solution for 2 processes. Note several variables are shared: shared int turn = 0; shared int flag[2] = {FALSE,FALSE}; • Hardware solution: Test-and-Set Locks (TSL). Operating System Concepts

  26. Mutual Exclusion Solutions • NOTE: The last two solutions, 4 and 5, require BUSY-WAITING; that is, a process executing the entry code will sit in a tight loop using up CPU cycles, testing some condition over and over, until it becomes true. • For example, in 5, in the enter_region code, a process keeps checking over and over to see if the flag has been set to 0. • Busy-waiting may lead to the PRIORITY-INVERSION PROBLEM if simple priority scheduling is used to schedule the processes. Operating System Concepts

  27. Priority Inversion • Example with test-and-set locks: P0 (low priority) is preempted while inside its critical section; P1 (high priority) is then scheduled and spins on TestAndSet forever. • Note, since priority scheduling is used, P1 keeps getting scheduled and wastes time busy-waiting. :-( • Thus, we have a situation in which a low-priority process is blocking a high-priority process, and this is called PRIORITY INVERSION. Operating System Concepts

  28. Semaphores [E. W. Dijkstra, 1965] • The solutions presented in the previous section are not easy to generalize to more complex problems. To overcome this difficulty, we can use a synchronization tool called a semaphore. • A synchronization tool that does not require busy waiting. • A semaphore S is an integer variable that can only be accessed via two indivisible (atomic) operations: wait(S): while (S <= 0) do no-op; S--; signal(S): S++; Note: These operations were originally termed P (wait, proberen) and V (signal, verhogen). To distinguish them from the C/UNIX functions wait and signal, our programming examples use DOWN and UP for wait and signal respectively. Operating System Concepts
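
For comparison only (this is not the course's sem.h): POSIX semaphores expose the same two operations under the names sem_wait (wait/DOWN) and sem_post (signal/UP). A minimal sketch:

    /* Minimal POSIX semaphore usage for mutual exclusion. */
    #include <semaphore.h>

    int main(void) {
        sem_t s;
        sem_init(&s, 0, 1);   /* 0 = shared among threads; initial value 1 */

        sem_wait(&s);         /* wait(S): decrement, block if already 0 */
        /* ... critical section ... */
        sem_post(&s);         /* signal(S): increment, wake a waiter if any */

        sem_destroy(&s);
        return 0;
    }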

  29. Critical Section of n Processes • We can use semaphores to deal with the n-process critical-section problem as shown in Fig7-11.c. • Shared data: semaphore mutex; // initially mutex = 1 • Process Pi: do { wait(mutex); critical section signal(mutex); remainder section } while (1); Operating System Concepts

  30. Semaphore Implementation • The original semaphore definition suffers from the same busy waiting as the previous mutual-exclusion solutions. • This type of semaphore is also called a spinlock because the process spins while waiting for the lock. • The advantage of a spinlock is that no context switch is required when a process must wait on a lock, and a context switch may take considerable time. Spinlocks are useful in multiprocessor systems. • To overcome the need for busy waiting, we can modify the definition of the wait and signal semaphore operations. Operating System Concepts

  31. Semaphore Implementation • Instead of busy waiting, the process can block itself and be placed in a waiting queue. It is restarted by a wakeup operation. • Define a semaphore as a record typedef struct { int value; struct process *L; } semaphore; • Each semaphore has an integer value and a list of processes. • Assume two simple operations: • block suspends the process that invokes it. • wakeup(P) resumes the execution of a blocked process P. Operating System Concepts

  32. Implementation • There are 2 operations on semaphores, wait/DOWN and signal/UP. These operations must be executed atomically (that is in mutual exclusion). Suppose that P is the process making the system call. • Semaphore operations now defined as wait(S): S.value--; if (S.value < 0) { add this process to S.L; block(P); } (a) enqueue the pid of P in S.L, (b) block process P (remove the pid from the ready queue), and (c) pass control to the scheduler. Operating System Concepts

  33. Implementation • Semaphore operations now defined as signal(S): S.value++; if (S.value <= 0) { remove a process P from S.L; wakeup(P); } (a) remove a pid from S.L (the pid of P), (b) put the pid in the ready queue, and (c) pass control to the scheduler. Operating System Concepts

  34. Implementation • The list of waiting processes can be implemented by a link field in each process control block (PCB). • This list can use a FIFO to ensure bounded waiting. • However, the list may use any queueing strategy. • We must guarantee that no two processes can execute wait and signal operations on the same semaphore at the same time. • In a uniprocessor environment, we can simply inhibit interrupts during the time the wait and signal operations are executing. • In a multiprocessor environment, inhibiting interrupts does not work. If the hardware does not provide any special instructions, we can employ any of the correct busy-waiting solutions to the critical sections. Operating System Concepts

  35. Semaphore as a General Synchronization Tool • Execute B in Pj only after A is executed in Pi • Use a semaphore flag initialized to 0 • Code: Pi: A; signal(flag)   Pj: wait(flag); B Operating System Concepts

  36. Deadlock and Starvation • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. • Let S and Q be two semaphores initialized to 1: P0: wait(S); wait(Q); … signal(S); signal(Q);   P1: wait(Q); wait(S); … signal(Q); signal(S); • The events with which we are mainly concerned here are resource acquisition and release. • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended. Operating System Concepts
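
The deadlock above can be demonstrated with POSIX semaphores and threads. This sketch is illustrative and not from the slides; the sleep calls merely widen the window so that each thread is likely to hold one semaphore before asking for the other:

    /* Illustrative deadlock demo with two semaphores acquired in opposite
       order by two threads. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <unistd.h>

    sem_t S, Q;                    /* both initialized to 1 */

    void *p0(void *arg) {
        sem_wait(&S);
        sleep(1);                  /* give p1 time to take Q */
        sem_wait(&Q);              /* blocks if p1 holds Q */
        sem_post(&S);
        sem_post(&Q);
        return NULL;
    }

    void *p1(void *arg) {
        sem_wait(&Q);
        sleep(1);                  /* give p0 time to take S */
        sem_wait(&S);              /* blocks if p0 holds S */
        sem_post(&Q);
        sem_post(&S);
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        sem_init(&S, 0, 1);
        sem_init(&Q, 0, 1);
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);    /* never returns if the deadlock occurs */
        pthread_join(t1, NULL);
        return 0;
    }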

  37. Two Types of Semaphores • Counting semaphore – integer value can range over an unrestricted domain. • Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement. • Can implement a counting semaphore S as a binary semaphore. Operating System Concepts

  38. Implementing S as a Binary Semaphore • Data structures: binary-semaphore S1, S2; int C; • Initialization: S1 = 1 S2 = 0 C = initial value of semaphore S Operating System Concepts

  39. Implementing S • wait operation wait(S1); C--; if (C < 0) { signal(S1); wait(S2); } signal(S1); • signal operation wait(S1); C ++; if (C <= 0) signal(S2); else signal(S1); Operating System Concepts
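
The construction above translates almost directly into C. The following hedged sketch uses POSIX semaphores (initialized to 1 and 0) as the two binary semaphores; the counting_sem type and function names are ours, and the logic follows the slide's wait and signal definitions:

    /* Hedged sketch: counting semaphore built from two binary semaphores. */
    #include <semaphore.h>

    typedef struct {
        sem_t S1;     /* protects C; initialized to 1 */
        sem_t S2;     /* waiters block here; initialized to 0 */
        int   C;      /* current value of the counting semaphore */
    } counting_sem;

    void counting_init(counting_sem *s, int value) {
        sem_init(&s->S1, 0, 1);
        sem_init(&s->S2, 0, 0);
        s->C = value;
    }

    void counting_wait(counting_sem *s) {      /* wait(S) */
        sem_wait(&s->S1);
        s->C--;
        if (s->C < 0) {
            sem_post(&s->S1);                  /* release S1, then block */
            sem_wait(&s->S2);
        }
        sem_post(&s->S1);
    }

    void counting_signal(counting_sem *s) {    /* signal(S) */
        sem_wait(&s->S1);
        s->C++;
        if (s->C <= 0)
            sem_post(&s->S2);                  /* wake one waiter; it releases S1 */
        else
            sem_post(&s->S1);
    }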

  40. Using Semaphores • Mutual Exclusion Problem: semaphore mutex = 1; /* set mutex.count = 1 */ DOWN(mutex); - critical section - UP(mutex); • To see how semaphores are used to eliminate race conditions in Ex4.c, see Ex7.c and sem.h. The library sem.h contains a version of UP(semid) and DOWN(semid) that correspond with UP and DOWN given above. • Another example is shown in Fig7-11.c. Operating System Concepts

  41. Using Semaphores • Process Synchronization: Order process execution: Suppose we have 4 processes: A, B, C, and D. A must finish executing before B and C start. B and C must finish executing before D starts.

       S1        S2
   A -----> B -----> D
   |                 ^
   |    S1      S3   |
   +------> C -------+

Then, the processes may be synchronized using semaphores: semaphore S1, S2, S3 = 0,0,0; Operating System Concepts

  42. Using Semaphores • Process Synchronization: Order process execution: Process A: ---------- - do work of A UP(S1); /* Let B or C start */ UP(S1); /* Let B or C start */ Process B: ---------- DOWN(S1); /* Block until A is finished */ - do work of B UP(S2); Operating System Concepts

  43. Using Semaphores Process C: ---------- DOWN(S1); - do work of C UP(S3); Process D: ---------- DOWN(S2); DOWN(S3); - do work of D Operating System Concepts

  44. Using Semaphores • Process Synchronization: Order process execution: Suppose we have 4 processes: A, B, C, and D. A must finish executing before B and C start. B and C must finish executing before D starts. (Another solution)

       S1        S2
   A -----> B -----> D
   |                 ^
   |    S3      S4   |
   +------> C -------+

Then, the processes may be synchronized using semaphores: semaphore S1, S2, S3, S4 = 0,0,0,0; Operating System Concepts

  45. Using Semaphores • Process Synchronization: Order process execution: Process A: ---------- - do work of A UP(S1); /* Let B start */ UP(S3); /* Let C start */ Process B: ---------- DOWN(S1); /* Block until A is finished */ - do work of B UP(S2); Operating System Concepts

  46. Using Semaphores Process C: ---------- DOWN(S3); - do work of C UP(S4); Process D: ---------- DOWN(S2); DOWN(S4); - do work of D Operating System Concepts
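
Putting the second solution together, here is a hedged sketch using POSIX threads and semaphores; sem_wait/sem_post stand in for DOWN/UP from the course's sem.h, and the thread-based structure is our assumption:

    /* Ordering A -> {B, C} -> D with four semaphores, all starting at 0. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t S1, S2, S3, S4;

    void *A(void *x) { printf("A\n"); sem_post(&S1); sem_post(&S3); return NULL; }
    void *B(void *x) { sem_wait(&S1); printf("B\n"); sem_post(&S2); return NULL; }
    void *C(void *x) { sem_wait(&S3); printf("C\n"); sem_post(&S4); return NULL; }
    void *D(void *x) { sem_wait(&S2); sem_wait(&S4); printf("D\n"); return NULL; }

    int main(void) {
        pthread_t t[4];
        sem_init(&S1, 0, 0); sem_init(&S2, 0, 0);
        sem_init(&S3, 0, 0); sem_init(&S4, 0, 0);
        /* creation order does not matter: the semaphores force A first,
           then B and C in either order, then D last */
        pthread_create(&t[3], NULL, D, NULL);
        pthread_create(&t[1], NULL, B, NULL);
        pthread_create(&t[2], NULL, C, NULL);
        pthread_create(&t[0], NULL, A, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        return 0;
    }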

  47. Classical Problems of Synchronization • These problems are used for testing every newly proposed synchronization scheme. • Bounded-Buffer Problem • Readers and Writers Problem • Dining-Philosophers Problem Operating System Concepts

  48. Bounded-Buffer Problem • Bounded Buffer Problem = Producer-Consumer Problem: Consider a circular buffer that can hold N items. Producers add items to the buffer and Consumers remove items from the buffer. The Producer-Consumer Problem is to restrict access to the buffer so that correct executions result. • Shared data semaphore not_full, not_empty, mutex; Initially: not_empty = 0, not_full = N, mutex = 1 Operating System Concepts

  49. Bounded-Buffer Problem Producer Process do { … produce an item in nextp … wait(not_full); wait(mutex); … add nextp to buffer … signal(mutex); signal(not_empty); } while (1); Operating System Concepts

  50. Bounded-Buffer Problem Consumer Process do { wait(not_empty); wait(mutex); … remove an item from buffer to nextc … signal(mutex); signal(not_full); … consume the item in nextc … } while (1); Operating System Concepts
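
A complete single-producer/single-consumer sketch corresponding to the two loops above, using POSIX threads and semaphores (buffer size, item type, and loop counts are arbitrary choices, not from the slides):

    /* Illustrative bounded-buffer solution with three semaphores. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 10
    int buffer[N];
    int in = 0, out = 0;

    sem_t not_full;      /* counts empty slots; starts at N  */
    sem_t not_empty;     /* counts filled slots; starts at 0 */
    sem_t mutex;         /* protects buffer, in, out; starts at 1 */

    void *producer(void *arg) {
        for (int item = 0; item < 100; item++) {
            sem_wait(&not_full);            /* wait for a free slot */
            sem_wait(&mutex);
            buffer[in] = item;              /* add nextp to buffer */
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&not_empty);           /* one more item available */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&not_empty);           /* wait for an item */
            sem_wait(&mutex);
            int item = buffer[out];         /* remove an item to nextc */
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&not_full);            /* one more free slot */
            printf("consumed %d\n", item);  /* consume the item */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&not_full, 0, N);
        sem_init(&not_empty, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }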
