
Chapter 7: Process Synchronization




  1. Chapter 7: Process Synchronization

  2. Contents • Background • The Critical-Section Problem • Synchronization Hardware • Semaphores • Classical Problems of Synchronization • Monitors • Java Synchronization • Synchronization in Solaris 2 • Synchronization in Windows NT

  3. § 7.1 Background • Concurrent access to shared data may result in data inconsistency. • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes. • The shared-memory solution to the bounded-buffer problem (Chapter 4) has a race condition on the shared variable counter.

  4. Bounded Buffer • Shared data:

        #define BUFFER_SIZE 10

        typedef struct {
            . . .
        } item;

        item buffer[BUFFER_SIZE];
        int in = 0;
        int out = 0;
        int counter = 0;

  5. Producer process

        item nextProduced;
        while (1) {
            while (counter == BUFFER_SIZE)
                ; /* do nothing */
            buffer[in] = nextProduced;
            in = (in + 1) % BUFFER_SIZE;
            counter++;
        }

  6. Consumer process

        item nextConsumed;
        while (1) {
            while (counter == 0)
                ; /* do nothing */
            nextConsumed = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            counter--;
        }

  7. Bounded Buffer • The statements counter++ and counter-- must be performed atomically. • An atomic operation is an operation that completes in its entirety without interruption.

  8. In machine language • counter++ is implemented as
        register1 = counter
        register1 = register1 + 1
        counter = register1
    • counter-- is implemented as
        register2 = counter
        register2 = register2 - 1
        counter = register2
    • The concurrent execution of the statements counter++ and counter-- is equivalent to a sequential execution in which these lower-level statements are interleaved in some arbitrary order.

  9. Interleaving • If both the producer and consumer attempt to update the buffer concurrently, the assembly language statements may get interleaved. • Interleaving depends upon how the producer and consumer processes are scheduled.

  10. Race Condition
        T0: producer executes register1 = counter        {register1 = 5}
        T1: producer executes register1 = register1 + 1  {register1 = 6}
        T2: consumer executes register2 = counter        {register2 = 5}
        T3: consumer executes register2 = register2 - 1  {register2 = 4}
        T4: producer executes counter = register1        {counter = 6}
        T5: consumer executes counter = register2        {counter = 4}
    Should be 5. Error!

  11. Race Condition • Race condition: the situation where several processes access and manipulate shared data concurrently, and the final value of the shared data depends upon which process finishes last. • To guard against the race condition, we need to ensure that only one thread at a time can be manipulating the variable counter. To make such a guarantee, we require some form of synchronization of the processes.

  12. Critical-Section Problem § 7.2 • n processes all competing to use some shared data. • Each process has a code segment, called its critical section, in which the thread may be changing common variables, updating a table, writing a file, and so on. • Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section... mutually exclusive. • The critical-section problem is how to choose a protocol that the threads can use to cooperate.

  13. Critical-Section Problem § 7.2 ( ) When one process is executing in its critical section, no other process is to be allowed to execute in its critical section, this is called _________. (A) global data (B) common method (C) mutually exclusive (D) constant accessing (E) value assertion • Each thread has a segment of code, called a critical section, in which the thread may be changing common variables, updating a table, writing a file, and so on. • When one thread is executing in its critical section, no other thread is to be allowed to execute in its critical section…mutually exclusive. • The critical-section problem is how to choose a protocol that the threads can use to cooperate. Answer: C

  14. Critical-Section Problem 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. 3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

  15. Critical-Section Problem Short Answer Question: A solution to the critical-section problem must satisfy three requirements: mutual exclusion, progress, and bounded waiting. Please explain the meaning of these requirements. 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. 3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. • Assume that each process executes at a nonzero speed. • No assumption concerning the relative speed of the n processes.

  16. Critical-Section Problem Multiple Choice Question: ( ) Which of the following is not one of the requirements for a solution of the critical-section problem? (A) progress (B) loopless (C) mutual exclusive (D) bounded waiting 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections. 2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. 3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. • Assume that each process executes at a nonzero speed. • No assumption concerning the relative speed of the n processes. Answer: B

  17. Critical-Section Problem • We can make no assumption concerning the relative speed of the n threads. • The solutions should not rely on any assumptions concerning the hardware instructions. • However, the basic machine-language instructions (the primitive instructions such as load, store, and test) are executed atomically.

  18. Critical-Section Problem True-False Question: ( ) When solving critical section problems, we should make proper assumption concerning the relative speed of the threads. • We can make no assumption concerning the relative speed of the n threads. • The solutions should not rely on any assumptions concerning the hardware instructions. • However, the basic machine-language instructions (the primitive instructions such as load, store, and test) are executed atomically. Answer: ×

  19. Two-Process Solutions • Only 2 processes, P0 and P1. • General structure of process Pi:
        do {
            entry section
            critical section
            exit section
            remainder section
        } while (1);
    • Use Pj to denote the other process (j == 1 - i). • Processes may share some common variables to synchronize their actions.

  20. Algorithm 1 • Shared variable: int turn; initially turn = 0. turn == i means Pi can enter its critical section. • Process Pi:
        do {
            while (turn != i)
                ;
            critical section
            turn = j;
            remainder section
        } while (1);
    • Satisfies mutual exclusion, but not progress.

  21. Algorithm 1 • Ensures that only one process at a time can be in its critical section. • However, does not satisfy the progress requirement, since it requires strict alternation of processes in the execution of their critical section.

  22. Algorithm 2 • Shared variables: boolean flag[2]; initially flag[0] = flag[1] = false. flag[i] == true means Pi is ready to enter its critical section. • Process Pi:
        do {
            flag[i] = true;
            while (flag[j])
                ;
            critical section
            flag[i] = false;
            remainder section
        } while (1);
    • Satisfies mutual exclusion, but not the progress requirement.

  23. Algorithm 2 Short Answer Question: Examine the algorithm on the right for solving the two-process critical-section problem. What has this algorithm achieved, and what kind of problem does it encounter, if any? • Shared variables: boolean flag[2]; initially flag[0] = flag[1] = false. flag[i] == true means Pi is ready to enter its critical section. • Process Pi:
        do {
            flag[i] = true;
            while (flag[j])
                ;
            critical section
            flag[i] = false;
            remainder section
        } while (1);
    • Satisfies mutual exclusion, but not the progress requirement.

  24. Algorithm 2 • The mutual-exclusion requirement is satisfied. • Unfortunately, the progress requirement is still not met. • Consider the following execution sequence: T0: P0 sets flag[0] = true; T1: P1 sets flag[1] = true. Now P0 and P1 are looping forever in their respective while statements.

  25. Algorithm 3 • Combines the shared variables of algorithms 1 and 2. • Process Pi:
        do {
            flag[i] = true;
            turn = j;
            while (flag[j] && turn == j)
                ;
            critical section
            flag[i] = false;
            remainder section
        } while (1);
    • Meets all three requirements; solves the critical-section problem for two processes.

  26. Algorithm 3 • Correct solution to the critical-section problem. (how to prove it? look at the textbook.) • If both processes try to enter at the same time, turn is set to both i and j at roughly the same time. • Only one of these assignments lasts; the other will occur, but will be overwritten immediately. • The eventual value of turn decides which of the two processes is allowed to enter its critical section first.

  27. Bakery Algorithm Critical section for n processes • Before entering its critical section, process receives a number. Holder of the smallest number enters the critical section. • If processes Pi and Pj receive the same number, if i < j, then Pi is served first; else Pj is served first. • The numbering scheme always generates numbers in increasing order of enumeration; i.e., 1,2,3,3,3,3,4,5...

  28. Bakery Algorithm • Notation: lexicographical order on (ticket #, process id #): (a,b) < (c,d) if a < c, or if a == c and b < d. • max(a0, …, an-1) is a number k such that k ≥ ai for i = 0, …, n-1. • Shared data: boolean choosing[n]; int number[n]; the arrays are initialized to false and 0, respectively.

  29. Bakery Algorithm
        do {
            choosing[i] = true;
            number[i] = max(number[0], number[1], …, number[n-1]) + 1;
            choosing[i] = false;
            for (j = 0; j < n; j++) {
                while (choosing[j])
                    ;
                while ((number[j] != 0) && ((number[j],j) < (number[i],i)))
                    ;
            }
            critical section
            number[i] = 0;
            remainder section
        } while (1);

  30. Synchronization Hardware § 7.3 • Hardware instructions can be used effectively in solving the critical-section problem. • They allow us either to test and modify the content of a word, or to swap the contents of two words, atomically, as one uninterruptible unit.

  31. Test-and-Set Instruction • Test and modify the content of a word atomically.
        boolean TestAndSet(boolean &target) {
            boolean rv = target;
            target = true;
            return rv;
        }

  32. Test-and-Set Instruction True-False Question: ( ) The value of the lock returned by the TestAndSet instruction is the value after the instruction is applied onto the target. • Test and modify the content of a word atomically.
        boolean TestAndSet(boolean &target) {
            boolean rv = target;
            target = true;
            return rv;
        }
    Answer: X

  33. Mutual Exclusion with Test-and-Set • Shared data: boolean lock = false; • Process Pi:
        do {
            while (TestAndSet(lock))
                ;
            critical section
            lock = false;
            remainder section
        } while (1);

  34. Mutual Exclusion with Test-and-Set Multiple Choice Question: ( ) When calling a test-and-set instruction, a ______ must be provided as its parameter. (a) semaphore (b) integer (c) constant (d) lock • Shared data: boolean lock = false; • Process Pi:
        do {
            while (TestAndSet(lock))
                ;
            critical section
            lock = false;
            remainder section
        } while (1);
    Answer: d

  35. Mutual Exclusion with Test-and-Set True-False Question: ( ) When the value returned by the test-and-set instruction is false, it means that the locking is not successful. • Shared data: boolean lock = false; • Process Pi:
        do {
            while (TestAndSet(lock))
                ;
            critical section
            lock = false;
            remainder section
        } while (1);
    Answer: x

  36. Synchronization Hardware • Atomically swap two variables.
        void Swap(boolean &a, boolean &b) {
            boolean temp = a;
            a = b;
            b = temp;
        }

  37. Mutual Exclusion with Swap • Shared data (initialized to false): boolean lock; boolean waiting[n]; • Each process also has a local variable: boolean key; • Process Pi:
        do {
            key = true;
            while (key == true)
                Swap(lock, key);
            critical section
            lock = false;
            remainder section
        } while (1);

  38. Mutual Exclusion with Swap Multiple Choice Question: ( ) The value of the key used to switch with the lock must be set to ______ before calling the swap method. (a) constant (b) false (c) true (d) integer • Shared data (initialized to false): boolean lock; boolean waiting[n]; • Process Pi:
        do {
            key = true;
            while (key == true)
                Swap(lock, key);
            critical section
            lock = false;
            remainder section
        } while (1);
    Answer: c

  39. Semaphore § 7.4 • Previous solutions (§7.3) are not easy to generalize to more complex problems. • Semaphores can be implemented as a synchronization tool that does not require busy waiting. • A semaphore S is an integer variable that can only be accessed via two indivisible (atomic) operations, P and V:
        P(S) {
            while (S <= 0)
                ; // no-op
            S--;
        }
        V(S) {
            S++;
        }
    These are the classical definitions of P and V; they require busy waiting.


  41. Semaphore § 7.4 • In the newer definition, P and V are renamed wait and signal:
        wait(S) {
            while (S <= 0)
                ; // no-op
            S--;
        }
        signal(S) {
            S++;
        }

  42. Solving the critical-section problem § 7.4.1 • Shared data: semaphore mutex; // initially mutex = 1 • Process Pi:
        do {
            wait(mutex);
            critical section
            signal(mutex);
            remainder section
        } while (1);

  43. Solving Synchronization problems • P1 with statement S1, P2 with statement S2; S2 can be executed only after S1 has completed. • Shared data: semaphore synch; // initially synch = 0 • Insert the statements
        S1;
        signal(synch);
    in process P1, and the statements
        wait(synch);
        S2;
    in process P2.

  44. Semaphore Eliminating Busy-Waiting § 7.4.2 • The main disadvantage of the previous solutions: they all require busy waiting… a problem in single-CPU, multiprogramming systems. • Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock. • Spinlocks are useful in multiprocessor systems. The advantage of a spinlock is that no context switch is required when a process must wait on a lock (a context switch may take considerable time). • Spinlocks are useful when locks are expected to be held for short times.

  45. Semaphore Eliminating Busy-Waiting • The main disadvantage of previous solutions: they all require busy waiting…a problem in single CPU, multiprogramming system. • Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock. • To overcome the need for busy waiting, the definition of wait and signal are modified.

  46. Semaphore Eliminating Busy-Waiting Short Answer Question: Although spinlocks waste CPU time with busy waiting, they are still useful in some systems. What are the advantages of spinlocks, and in which situations are they considered useful? • The main disadvantage of the previous solutions: they all require busy waiting… a problem in single-CPU, multiprogramming systems. • Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock. • To overcome the need for busy waiting, the definitions of wait and signal are modified...

  47. Semaphore Eliminating Busy-Waiting • Rather than busy waiting, the process can block itself. • The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. • A process that is blocked, waiting on S, should be restarted when some other process executes a signal operation. • When the process is restarted by the wakeup operation, the process is changed from the waiting state to the ready state. The process is then placed in the ready queue.

  48. Semaphore Eliminating Busy-Waiting • Define a semaphore as a record:
        typedef struct {
            int value;
            struct process *L;
        } semaphore;
    • Assume two simple operations: block suspends the process that invokes it; wakeup(P) resumes the execution of a blocked process P. • These two operations are provided by the OS as basic system calls.

  49. Semaphore Eliminating Busy-Waiting • Semaphore operations now defined as:
        void wait(semaphore S) {
            S.value--;
            if (S.value < 0) {
                add this process to S.L;
                block;
            }
        }
        void signal(semaphore S) {
            S.value++;
            if (S.value <= 0) {
                remove a process P from S.L;
                wakeup(P);
            }
        }

  50. Deadlock and Starvation § 7.4.3 • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. • Let S and Q be two semaphores initialized to 1:
        P0:              P1:
        wait(S);         wait(Q);
        wait(Q);         wait(S);
        …                …
        signal(S);       signal(Q);
        signal(Q);       signal(S);
    • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
