
Chapter 5: Process Synchronization


Presentation Transcript


  1. Chapter 5: Process Synchronization

  2. Chapter 5: Process Synchronization • Background • The Critical-Section Problem • Peterson’s Solution • Synchronization Hardware • Mutex Locks • Semaphores • Classic Problems of Synchronization • Monitors • Synchronization Examples • Alternative Approaches

  3. Objectives • To present the concept of process synchronization • To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data • To present both software and hardware solutions to the critical-section problem • To examine several classical process-synchronization problems • To explore several tools that are used to solve process-synchronization problems

  4. Background • Processes can execute concurrently • May be interrupted at any time, partially completing execution • Concurrent access to shared data may result in data inconsistency • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes • Illustration of the problem: Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.

  5. Producer

     while (1) {
        while (counter == BUFFER_SIZE)
           ;   // do nothing
        // produce an item and put it in nextProduced
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
     }

  6. Consumer

     while (1) {
        while (counter == 0)
           ;   // do nothing
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        // consume the item in nextConsumed
     }

  7. Updating of Shared Variable
     • counter++ could be implemented as
          register1 = counter
          register1 = register1 + 1
          counter = register1
     • counter-- could be implemented as
          register2 = counter
          register2 = register2 - 1
          counter = register2

  8. Possible Execution Interleaving
     • Consider this interleaving of the two three-instruction sequences, with counter initially 5:
          S0: producer executes register1 = counter          {register1 = 5}
          S1: producer executes register1 = register1 + 1    {register1 = 6}
          S2: consumer executes register2 = counter          {register2 = 5}
          S3: consumer executes register2 = register2 - 1    {register2 = 4}
          S4: producer executes counter = register1          {counter = 6}
          S5: consumer executes counter = register2          {counter = 4}
     • Depending on the interleaving, counter may end up 4, 5, or 6, even though 5 is the only correct result
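
     The interleaving above can be reproduced on a real machine. The following is a minimal sketch (not part of the original slides) that races an unsynchronized counter between a producer-style and a consumer-style thread using POSIX threads; the file name, iteration count, and thread bodies are illustrative choices.

        /* race_demo.c -- demonstrate the counter++ / counter-- race.
           Compile with: gcc race_demo.c -pthread -o race_demo        */
        #include <pthread.h>
        #include <stdio.h>

        #define ITERATIONS 1000000

        static long counter = 0;            /* shared, unprotected */

        static void *producer(void *arg) {
            for (long i = 0; i < ITERATIONS; i++)
                counter++;                  /* load, add, store: not atomic */
            return NULL;
        }

        static void *consumer(void *arg) {
            for (long i = 0; i < ITERATIONS; i++)
                counter--;                  /* interleaves with producer */
            return NULL;
        }

        int main(void) {
            pthread_t p, c;
            pthread_create(&p, NULL, producer, NULL);
            pthread_create(&c, NULL, consumer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            /* A correct program would print 0; because of the race it
               usually prints something else. */
            printf("counter = %ld\n", counter);
            return 0;
        }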

  9. Race Condition • A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place • The bottom-level indivisible operation is architecture-dependent; typically it is whatever takes place in one CPU cycle, and everything else can be divided • The lowest-level atomic operation is called the memory interlock or hardware arbiter; everything else is built on top of it

  10. Critical Section • To avoid these unpredictable situations we need some way of synchronizing processes (establishing an order) at their points of interaction • The critical section is the segment of code in which a process may be changing common variables, updating a table, writing a file, and so on (i.e., a segment of code containing at least one shared variable) • When one process is executing in its critical section, no other process should be allowed to execute in its critical section; that is, no two processes should execute in their critical sections at the same time

  11. Critical Section • Critical sections are used to artificially create indivisible operations • The critical section problem is to design a protocol that processes can use to cooperate. Each process must request permission to enter its critical section

  12. General Structure of a Process

     do {
        [entry section]
           critical section
        [exit section]
           remainder section
     } while (TRUE);

  13. Solution to Critical-Section Problem • Simultaneous Execution is prohibited (Mutual Exclusion) - If process Pi is executing in its critical section, then no other process can be executing in its critical section ESSENTIAL • Blocking is prohibited – A process that is not in its critical section must not prevent another process from getting into the critical section (i.e., it is not turn-taking) FOR EFFICIENCY • Indefinite Blocking is prohibited – The decision as to which process enters its critical section next must be made in finite time (i.e., no after-you syndrome) FOR EFFICIENCY • Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted

  14. Solution to Critical Section Problem • Assume that each process executes at a nonzero speed • No assumption concerning relative speed of the n processes

  15. Initial Attempts to Solve Problem • Only 2 processes, Pi and Pj • General structure of a process:

     do {
        entry section
           critical section
        exit section
           remainder section
     } while (TRUE);

     • Processes may share some common variables to synchronize their actions

  16. Algorithm 1 • Shared variable: int turn; initially turn = i • turn == i means Pi can enter its critical section

     Process Pi:
        do {
           while (turn != i)
              ;                    // busy wait
           critical section
           turn = j;
           remainder section
        } while (TRUE);

     Process Pj:
        do {
           while (turn != j)
              ;                    // busy wait
           critical section
           turn = i;
           remainder section
        } while (TRUE);

     • Satisfies mutual exclusion, but blocking is not prohibited

  17. Algorithm 2 • Shared variable: boolean flag[2]; initially flag[0] = flag[1] = false • flag[i] = true means Pi is ready to enter its critical section

     Process Pi:
        do {
           flag[i] = true;
           while (flag[j])
              ;                    // busy wait
           critical section
           flag[i] = false;
           remainder section
        } while (TRUE);

     Process Pj:
        do {
           flag[j] = true;
           while (flag[i])
              ;                    // busy wait
           critical section
           flag[j] = false;
           remainder section
        } while (TRUE);

     • Satisfies mutual exclusion, but indefinite blocking is not prohibited

  18. Solution to Critical-Section Problem (recap) • Simultaneous Execution is prohibited (Mutual Exclusion) - If process Pi is executing in its critical section, then no other process can be executing in its critical section ESSENTIAL • Blocking is prohibited – A process that is not in its critical section must not prevent another process from getting into the critical section (i.e., it is not turn-taking) FOR EFFICIENCY • Indefinite Blocking is prohibited – The decision as to which process enters its critical section next must be made in finite time (i.e., no after-you syndrome) FOR EFFICIENCY • Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted

  19. Algorithm 3 • Combined shared variables of algorithms 1 and 2

     Process Pi:
        do {
           flag[i] = true;
           turn = j;
           while (flag[j] && turn == j)
              ;                    // busy wait
           critical section
           flag[i] = false;
           remainder section
        } while (TRUE);

     Process Pj:
        do {
           flag[j] = true;
           turn = i;
           while (flag[i] && turn == i)
              ;                    // busy wait
           critical section
           flag[j] = false;
           remainder section
        } while (TRUE);

     • Meets all requirements; solves the critical-section problem for two processes
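
     Algorithm 3 is Peterson's algorithm, and it maps almost directly onto C. Below is a minimal sketch (not part of the original slides) for two threads; it uses C11 seq_cst atomics for flag and turn so the compiler and CPU cannot reorder the entry-protocol accesses, since plain variables would not be safe on modern hardware. Identifiers such as enter(), leave(), and worker() are illustrative.

        /* peterson.c -- sketch of Peterson's algorithm with POSIX threads.
           Compile with: gcc peterson.c -pthread -o peterson              */
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <pthread.h>
        #include <stdio.h>

        static atomic_bool flag[2];            /* "I want in" flags       */
        static atomic_int  turn;               /* whose turn to defer to  */
        static long shared_count = 0;          /* protected by the protocol */

        static void enter(int i) {
            int j = 1 - i;
            atomic_store(&flag[i], true);      /* I want in               */
            atomic_store(&turn, j);            /* but you may go first    */
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                              /* busy wait               */
        }

        static void leave(int i) {
            atomic_store(&flag[i], false);     /* I'm done                */
        }

        static void *worker(void *arg) {
            int i = *(int *)arg;
            for (int k = 0; k < 1000000; k++) {
                enter(i);
                shared_count++;                /* critical section        */
                leave(i);
            }
            return NULL;
        }

        int main(void) {
            pthread_t t0, t1;
            int id0 = 0, id1 = 1;
            pthread_create(&t0, NULL, worker, &id0);
            pthread_create(&t1, NULL, worker, &id1);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);
            printf("shared_count = %ld (expected 2000000)\n", shared_count);
            return 0;
        }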

  20. Peterson’s: Proof of Correctness • Mutual exclusion holds since, for both P0 and P1 to be in their CS at the same time, both flag[0] and flag[1] would have to be true and turn would have to equal 0 and 1 at the same time: impossible

  21. Proof (“Progress”) • Progress requirement: Pi can be kept out of the CS only if it is stuck in the while loop, i.e., flag[j] = true and turn = j • If Pj is not ready to enter the CS, then flag[j] = false and Pi can enter its CS • If Pj has set flag[j] and is also in its while loop, then either Pi or Pj will proceed, depending on the value of turn • Therefore the progress condition is met

  22. Proof (“Bounded Waiting”) • Suppose Pj gets to go this time; can it go a second time without letting Pi go? • If Pj enters the CS, then turn = j, but Pj will reset flag[j] = false on exit, allowing Pi to enter its CS • What if Pj tries again, and has time to reset flag[j] = true before Pi gets to its CS? It must also set turn = i; since Pi is (stuck) past the point where it sets turn = j, Pi will get to enter the CS after at most one CS entry by Pj

     Process Pi:
        repeat
           flag[i] := true;                 // I want in
           turn := j;                       // but you can go first!
           while (flag[j] && turn == j) ;   // (loop)
           CS
           flag[i] := false;                // I'm done
           RS
        forever

  23. Bakery Algorithm • Critical-section problem for n processes • Before entering its critical section, a process receives a number; the holder of the smallest number enters the critical section • If processes Pi and Pj receive the same number, then Pi is served first if i < j; otherwise Pj is served first • The numbering scheme always generates numbers in increasing order of enumeration; e.g., 1,2,3,3,3,3,4,5...

  24. Bakery Algorithm
     • Notation: < is lexicographic order on pairs (ticket #, process id #)
          (a,b) < (c,d) if a < c, or if a = c and b < d
     • Shared data:
          boolean choosing[n];
          int number[n];
       The data structures are initialized to false and 0, respectively

  25. Bakery Algorithm • Creating a number (the first part of the ticket) • Awaiting permission to enter the CS — see the sketch below
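
     The slide shows these two steps as a figure; the following is a sketch of the standard n-process bakery entry/exit protocol written with the choosing[]/number[] arrays defined on the previous slide. It assumes process ids 0..N-1 and a small fixed N; on real hardware the shared arrays would also need atomic or fenced accesses rather than plain volatile variables.

        /* bakery.c -- sketch of the bakery algorithm entry/exit protocol */
        #include <stdbool.h>

        #define N 5                       /* number of processes (assumed) */

        volatile bool choosing[N];        /* initialized to false */
        volatile int  number[N];          /* initialized to 0     */

        /* (a,b) < (c,d): lexicographic order on (ticket, process id) */
        static bool precedes(int a, int b, int c, int d) {
            return a < c || (a == c && b < d);
        }

        void bakery_enter(int i) {
            /* creating a number: take a ticket larger than any in use */
            choosing[i] = true;
            int max = 0;
            for (int k = 0; k < N; k++)
                if (number[k] > max) max = number[k];
            number[i] = max + 1;
            choosing[i] = false;

            /* awaiting permission: wait for everyone with a smaller ticket */
            for (int k = 0; k < N; k++) {
                while (choosing[k])
                    ;    /* k is still picking its number */
                while (number[k] != 0 && precedes(number[k], k, number[i], i))
                    ;    /* k goes first */
            }
        }

        void bakery_exit(int i) {
            number[i] = 0;                /* give up the ticket */
        }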

  26. Synchronization Hardware • We have seen software-based solutions to the critical section problem • In general, we can state that any solution to the critical-section problem requires a simple tool – a lock • Race conditions are prevented by requiring that critical regions be protected by locks. That is, a process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section

  27. test_and_set Instruction
     Definition:

        boolean test_and_set(boolean *target) {
           boolean rv = *target;
           *target = TRUE;
           return rv;
        }

     • Executed atomically
     • Returns the original value of the passed parameter
     • Sets the new value of the passed parameter to TRUE

  28. Solution using test_and_set() • Shared boolean variable lock, initialized to FALSE • Solution:

        do {
           while (test_and_set(&lock))
              ;   /* do nothing */
           /* critical section */
           lock = false;
           /* remainder section */
        } while (true);

     • Satisfies mutual exclusion, but indefinite blocking is not prohibited
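
     As a working stand-in for the hardware instruction, C11's atomic_flag provides an atomic test-and-set. The sketch below (not part of the original slides) builds the same spinlock with it; the file name and worker() function are illustrative.

        /* spinlock_tas.c -- test_and_set-style spinlock via C11 atomic_flag.
           Compile with: gcc spinlock_tas.c -pthread -o spinlock_tas        */
        #include <stdatomic.h>
        #include <pthread.h>
        #include <stdio.h>

        static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */
        static long shared_count = 0;

        static void *worker(void *arg) {
            for (int k = 0; k < 1000000; k++) {
                while (atomic_flag_test_and_set(&lock))
                    ;                       /* spin: do nothing          */
                shared_count++;             /* critical section          */
                atomic_flag_clear(&lock);   /* lock = false              */
            }
            return NULL;
        }

        int main(void) {
            pthread_t t0, t1;
            pthread_create(&t0, NULL, worker, NULL);
            pthread_create(&t1, NULL, worker, NULL);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);
            printf("shared_count = %ld (expected 2000000)\n", shared_count);
            return 0;
        }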

  29. compare_and_swap Instruction
     Definition:

        int compare_and_swap(int *value, int expected, int new_value) {
           int temp = *value;
           if (*value == expected)
              *value = new_value;
           return temp;
        }

     • Executed atomically
     • Returns the original value of the passed parameter “value”
     • Sets the variable “value” to “new_value”, but only if *value == “expected”; that is, the swap takes place only under this condition

  30. Solution using compare_and_swap • Shared integer “lock” initialized to 0 • Solution:

        do {
           while (compare_and_swap(&lock, 0, 1) != 0)
              ;   /* do nothing */
           /* critical section */
           lock = 0;
           /* remainder section */
        } while (true);

     • Satisfies mutual exclusion, but indefinite blocking is not prohibited
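
     The same lock can be expressed with C11's atomic_compare_exchange_strong, which plays the role of the compare_and_swap() instruction on the slide. This is a minimal sketch (not from the slides); cas_acquire()/cas_release() are illustrative names.

        /* spinlock_cas.c -- compare-and-swap spinlock sketch */
        #include <stdatomic.h>

        static atomic_int lock;           /* zero-initialized: 0 = unlocked */

        void cas_acquire(void) {
            int expected = 0;
            /* the swap succeeds only if lock == 0; otherwise retry */
            while (!atomic_compare_exchange_strong(&lock, &expected, 1))
                expected = 0;             /* the failed call overwrote expected */
        }

        void cas_release(void) {
            atomic_store(&lock, 0);       /* lock = 0 */
        }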

  31. Bounded-waiting Mutual Exclusion with test_and_set
     Code for process Pi (the slide also shows the symmetric copy for Pj, with i and j interchanged):

        do {
           waiting[i] = true;
           key = true;
           while (waiting[i] && key)
              key = test_and_set(&lock);
           waiting[i] = false;
           /* critical section */
           j = (i + 1) % n;
           while ((j != i) && !waiting[j])
              j = (j + 1) % n;
           if (j == i)
              lock = false;
           else
              waiting[j] = false;
           /* remainder section */
        } while (true);

     • Meets all requirements; solves the critical-section problem

  32. Mutex Locks • Previous solutions are complicated and generally inaccessible to application programmers • OS designers build software tools to solve the critical-section problem • Simplest is the mutex lock • Protect a critical section by first calling acquire() to get the lock, then release() to give it up • A boolean variable indicates whether the lock is available or not • Calls to acquire() and release() must be atomic • Usually implemented via hardware atomic instructions • But this solution requires busy waiting • This lock is therefore called a spinlock

  33. acquire() and release()

     acquire() {
        while (!available)
           ;   /* busy wait */
        available = false;
     }

     release() {
        available = true;
     }

     do {
        acquire lock
           critical section
        release lock
           remainder section
     } while (true);
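
     In application code, this acquire()/release() pattern is normally obtained from a library rather than hand-rolled. The sketch below (not part of the original slides) shows the same structure with the Pthreads mutex API; the counter and iteration count are illustrative.

        /* mutex_demo.c -- acquire()/release() via pthread_mutex_lock/unlock.
           Compile with: gcc mutex_demo.c -pthread -o mutex_demo            */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static long counter = 0;

        static void *worker(void *arg) {
            for (int k = 0; k < 1000000; k++) {
                pthread_mutex_lock(&lock);     /* acquire lock     */
                counter++;                     /* critical section */
                pthread_mutex_unlock(&lock);   /* release lock     */
            }
            return NULL;
        }

        int main(void) {
            pthread_t t0, t1;
            pthread_create(&t0, NULL, worker, NULL);
            pthread_create(&t1, NULL, worker, NULL);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);
            printf("counter = %ld (expected 2000000)\n", counter);
            return 0;
        }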

  34. Semaphores • The various hardware-based solutions to the critical-section problem (using the test_and_set() and compare_and_swap() instructions) are complicated for application programmers to use • To overcome this difficulty, we can use a synchronization tool called a semaphore • Dijkstra is well known as the inventor of the semaphore, the first software-oriented primitive to accomplish process synchronization [Dijkstra, 1968] • Dijkstra’s work on semaphores, done over 30 years ago, established the foundation of modern techniques for accomplishing synchronization

  35. Semaphore • Synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities • Semaphore S – integer variable • Can only be accessed via two indivisible (atomic) operations: wait() and signal(), originally called P() and V()

     Definition of the wait() operation:
        wait(S) {
           while (S <= 0)
              ;   // busy wait
           S--;
        }

     Definition of the signal() operation:
        signal(S) {
           S++;
        }

  36. Semaphore Usage • Counting semaphore – integer value can range over an unrestricted domain • Binary semaphore – integer value can range only between 0 and 1; same as a mutex lock • Can solve various synchronization problems • Consider P1 and P2 that require S1 to happen before S2: create a semaphore synch initialized to 0

        P1:  S1;
             signal(synch);

        P2:  wait(synch);
             S2;

     • Can implement a counting semaphore S as a binary semaphore
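
     The S1-before-S2 ordering pattern translates directly to POSIX semaphores. The sketch below (not part of the original slides) uses sem_init/sem_wait/sem_post with the semaphore initialized to 0; the printf calls stand in for the statements S1 and S2.

        /* sem_order.c -- enforce "S1 happens before S2" with a semaphore.
           Compile with: gcc sem_order.c -pthread -o sem_order            */
        #include <semaphore.h>
        #include <pthread.h>
        #include <stdio.h>

        static sem_t synch;                  /* initialized to 0 */

        static void *p1(void *arg) {
            printf("S1\n");                  /* statement S1  */
            sem_post(&synch);                /* signal(synch) */
            return NULL;
        }

        static void *p2(void *arg) {
            sem_wait(&synch);                /* wait(synch): blocks until S1 is done */
            printf("S2\n");                  /* statement S2  */
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            sem_init(&synch, 0, 0);          /* shared between threads, value 0 */
            pthread_create(&t2, NULL, p2, NULL);
            pthread_create(&t1, NULL, p1, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            sem_destroy(&synch);
            return 0;
        }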

  37. Semaphore Implementation • Must guarantee that no two processes can execute the wait() and signal() on the same semaphore at the same time • Thus, the implementation becomes a critical-section problem where the wait and signal code are placed in the critical section • Could now have busy waiting in the critical-section implementation • But the implementation code is short • Little busy waiting if the critical section is rarely occupied • Note that applications may spend lots of time in critical sections and therefore this is not a good solution

  38. Semaphore Implementation with no Busy Waiting • With each semaphore there is an associated waiting queue • Each entry in a waiting queue has two data items: a value (of type integer) and a pointer to the next record in the list • Two operations: block – place the process invoking the operation on the appropriate waiting queue; wakeup – remove one of the processes in the waiting queue and place it in the ready queue

        typedef struct {
           int value;
           struct process *list;
        } semaphore;

  39. Implementation with no Busy Waiting (Cont.)

     wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
           add this process to S->list;
           block();
        }
     }

     signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
           remove a process P from S->list;
           wakeup(P);
        }
     }

  40. Deadlock and Starvation • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes • Let S and Q be two semaphores initialized to 1:

        P0:                 P1:
          wait(S);            wait(Q);
          wait(Q);            wait(S);
          ...                 ...
          signal(S);          signal(Q);
          signal(Q);          signal(S);

     • Starvation – indefinite blocking; a process may never be removed from the semaphore queue in which it is suspended • Priority Inversion – scheduling problem when a lower-priority process holds a lock needed by a higher-priority process; solved via a priority-inheritance protocol
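
     The P0/P1 pattern above can be tried out directly. The sketch below (not part of the original slides) builds it with POSIX semaphores and two threads; depending on the interleaving, a given run may complete or may hang forever with each thread holding one semaphore and waiting for the other.

        /* deadlock_demo.c -- the two-semaphore deadlock from the slide.
           Compile with: gcc deadlock_demo.c -pthread -o deadlock_demo  */
        #include <semaphore.h>
        #include <pthread.h>

        static sem_t S, Q;                 /* both initialized to 1 */

        static void *p0(void *arg) {
            sem_wait(&S);
            sem_wait(&Q);                  /* blocks forever if p1 already holds Q */
            /* ... */
            sem_post(&S);
            sem_post(&Q);
            return NULL;
        }

        static void *p1(void *arg) {
            sem_wait(&Q);
            sem_wait(&S);                  /* blocks forever if p0 already holds S */
            /* ... */
            sem_post(&Q);
            sem_post(&S);
            return NULL;
        }

        int main(void) {
            pthread_t t0, t1;
            sem_init(&S, 0, 1);
            sem_init(&Q, 0, 1);
            pthread_create(&t0, NULL, p0, NULL);
            pthread_create(&t1, NULL, p1, NULL);
            pthread_join(t0, NULL);        /* may never return if deadlock occurs */
            pthread_join(t1, NULL);
            return 0;
        }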

  41. Two Types of Semaphores • Counting semaphore – integer value can range over an unrestricted domain • Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement

  42. Classical Problems of Synchronization • Bounded-Buffer Producer-Consumer Problem • Readers and Writers Problem • Dining-Philosophers Problem

  43. Bounded-Buffer Producer-Consumer Problem • Shared data: semaphore full, empty, mutex; • Initially: full = 0, empty = n, mutex = 1, where n is the buffer size

  44. Bounded-Buffer Producer-Consumer Problem

     Producer:
        do {
           ...
           produce an item
           ...
           P(empty);
           P(mutex);
           ...
           add the item to the buffer
           ...
           V(mutex);
           V(full);
        } while (TRUE);

     • The producer must wait for an empty space in the buffer
     • We must make sure that the producer and the consumer make changes to the shared buffer in a mutually exclusive manner

  45. Bounded-Buffer Problem Consumer Process

     Consumer:
        do {
           P(full);
           P(mutex);
           ...
           remove an item from the buffer
           ...
           V(mutex);
           V(empty);
           ...
           consume the item
           ...
        } while (TRUE);

     • The consumer must wait for a filled space in the buffer
     • We must make sure that the producer and the consumer make changes to the shared buffer in a mutually exclusive manner
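
     The two slides above can be combined into a complete program. The sketch below (not from the slides) uses POSIX semaphores full/empty/mutex with the same initial values (0, n, 1); the buffer size, item count, and integer items are illustrative choices.

        /* bounded_buffer.c -- producer-consumer with POSIX semaphores.
           Compile with: gcc bounded_buffer.c -pthread -o bounded_buffer */
        #include <semaphore.h>
        #include <pthread.h>
        #include <stdio.h>

        #define BUFFER_SIZE 5
        #define ITEMS       20

        static int   buffer[BUFFER_SIZE];
        static int   in = 0, out = 0;
        static sem_t full, empty, mutex;     /* full=0, empty=n, mutex=1 */

        static void *producer(void *arg) {
            for (int item = 0; item < ITEMS; item++) {
                sem_wait(&empty);            /* P(empty): wait for a free slot   */
                sem_wait(&mutex);            /* P(mutex): enter critical section */
                buffer[in] = item;
                in = (in + 1) % BUFFER_SIZE;
                sem_post(&mutex);            /* V(mutex) */
                sem_post(&full);             /* V(full): one more filled slot    */
            }
            return NULL;
        }

        static void *consumer(void *arg) {
            for (int k = 0; k < ITEMS; k++) {
                sem_wait(&full);             /* P(full): wait for a filled slot  */
                sem_wait(&mutex);            /* P(mutex) */
                int item = buffer[out];
                out = (out + 1) % BUFFER_SIZE;
                sem_post(&mutex);            /* V(mutex) */
                sem_post(&empty);            /* V(empty): one more free slot     */
                printf("consumed %d\n", item);
            }
            return NULL;
        }

        int main(void) {
            pthread_t p, c;
            sem_init(&full, 0, 0);
            sem_init(&empty, 0, BUFFER_SIZE);
            sem_init(&mutex, 0, 1);
            pthread_create(&p, NULL, producer, NULL);
            pthread_create(&c, NULL, consumer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            return 0;
        }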

  46. Readers-Writers Problem • A database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update the database • We distinguish between these two types of processes by referring to the former as readers and to the latter as writers • Obviously, if two readers access the shared data simultaneously, nothing bad will happen • However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue

  47. Readers-Writers Problem • To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database • This synchronization problem has been used to test nearly every new synchronization primitive • There are several variations of this problem, all involving priorities • The first and simplest one, referred to as the first readers-writers problem (Duh…), requires that no reader will be kept waiting unless a writer has already obtained permission to use the shared object (i.e., no reader should wait for other readers to finish simply because a writer is waiting) NOTE: writers may starve • The second readers-writers problem requires that, once a writer is ready, that writer performs its write as soon as possible (i.e., if a writer is waiting, no new readers may start reading) NOTE: readers may starve

  48. First Readers-Writers Problem • Shared data: semaphore mutex, wrt; int readcount; • Initially: mutex = 1, wrt = 1, readcount = 0

  49. First Readers-Writers Problem

     Writer:
        do {
           P(wrt);
           ...
           writing is performed
           ...
           V(wrt);
        } while (TRUE);

     • A writer will wait if either another writer is currently writing or one or more readers are currently reading

  50. First Readers-Writers Problem

     Reader:
        do {
           P(mutex);
           readcount++;
           if (readcount == 1)
              P(wrt);
           V(mutex);
           ...
           reading is performed
           ...
           P(mutex);
           readcount--;
           if (readcount == 0)
              V(wrt);
           V(mutex);
        } while (TRUE);

     • A reader will wait only if a writer is currently writing. Note that when readcount == 1 the arriving reader is the only reader, so that is the only time a reader has to make sure that no writer is currently writing (i.e., if readcount > 1, at least one other reader is already reading and the new reader does not have to wait)
     • We must make sure that readers update the shared variable readcount in a mutually exclusive manner
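
     The reader and writer entry/exit protocols on these slides translate directly to POSIX semaphores. The sketch below (not part of the original slides) uses mutex and wrt, both of which are assumed to have been initialized to 1 with sem_init before use; the function names are illustrative.

        /* readers_writers.c -- first readers-writers entry/exit protocols */
        #include <semaphore.h>

        static sem_t mutex, wrt;        /* both must be sem_init'ed to 1 */
        static int   readcount = 0;     /* number of active readers      */

        void writer_enter(void) { sem_wait(&wrt); }   /* P(wrt) */
        void writer_exit(void)  { sem_post(&wrt); }   /* V(wrt) */

        void reader_enter(void) {
            sem_wait(&mutex);           /* protect readcount              */
            readcount++;
            if (readcount == 1)         /* first reader locks out writers */
                sem_wait(&wrt);
            sem_post(&mutex);
        }

        void reader_exit(void) {
            sem_wait(&mutex);
            readcount--;
            if (readcount == 0)         /* last reader lets writers in    */
                sem_post(&wrt);
            sem_post(&mutex);
        }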
