
Process Synchronization



  1. Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors

  2. Background • Concurrent access to shared data may result in data inconsistency. • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

  3. Background • Shared-memory solution to the bounded-buffer problem allows at most N – 1 items in the buffer at the same time. • A solution where all N buffers are used is not simple. • Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0 and incremented each time a new item is added to the buffer.

  4. Bounded-Buffer (Old) • Shared data #define BUFFER_SIZE 10 typedef struct { . . . } item; item buffer[BUFFER_SIZE]; int in = 0; int out = 0; • Solution is correct, but can only use BUFFER_SIZE – 1 elements

  5. Bounded-Buffer (Old) – Producer Process item nextProduced; while (1) { while (((in + 1) % BUFFER_SIZE) == out); /* do nothing */ buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; }

  6. Bounded-Buffer (Old) – Consumer Process item nextConsumed; while (1) { while (in == out); /* do nothing */ nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; }

  7. Bounded-Buffer (Proper) • Shared data #define BUFFER_SIZE 10 typedef struct { . . . } item; item buffer[BUFFER_SIZE]; int in = 0; int out = 0; int counter = 0;

  8. Bounded-Buffer Producer process item nextProduced; while (1) { while (counter == BUFFER_SIZE); //spin buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; counter++; }

  9. Bounded-Buffer Consumer process item nextConsumed; while (1) { while (counter == 0); // do nothing nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; counter--; }

  10. Bounded Buffer • The statement “counter++” may be implemented in machine language as: register1 = counter register1 = register1 + 1 counter = register1 • The statement “counter--” may be implemented as: register2 = counter register2 = register2 – 1 counter = register2

  11. Bounded Buffer • If both the producer and consumer attempt to update the buffer concurrently, the assembly language statements may get interleaved. • Interleaving depends upon how the producer and consumer processes are scheduled.

  12. Race Condition • Assume counter is initially 5. One interleaving of statements is: producer: register1 = counter producer: register1 = register1 + 1 consumer: register2 = counter consumer: register2 = register2 – 1 consumer: counter = register2 producer: counter = register1 • This interleaving leaves counter at 6; the correct result should be 5.

  13. Race Condition • Race condition: the situation where several processes access and manipulate shared data concurrently; the final value of the shared data depends upon which process finishes last. • To prevent race conditions, concurrent processes must be synchronized.

  14. Disable Interrupts? • Uniprocessor: • Disable interrupts to solve the lock race condition. • Multiprocessor: • Disabling interrupts requires sending an interrupt message to all processors: • Delay in message passing. • System degradation. • Solution? • Handle the lock variable with an atomic assignment. • Swap the variable in hardware.

  15. Bounded Buffer • The statements:counter++;counter--;must be performed atomically. • Atomic operation means an operation that completes in its entirety without interruption.

  16. The Critical-Section Problem • n processes, all competing for some shared data • Each process has a code segment, called critical section, in which the shared data is accessed. • Problem: • ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

  17. General Structure General structure of process Pi (other process Pj) do { entry section critical section exit section remainder section } while (1); • Processes may share some common variables to synchronize their actions.

  18. Critical-Section Solution There are 3 requirements that must be met for a solution to the critical-section problem: • Mutual Exclusion • Progress • Bounded Waiting

  19. Critical-Section Solution Mutual Exclusion • If process Pi is executing in its critical section ⇒ no other processes can be executing in their critical sections.

  20. Critical-Section Solution Progress a) if no process is executing in its critical section and b) there exist some processes that wish to enter their critical sections ⇒ only those processes that are not executing in their remainder sections can participate in the decision on which will enter the critical section next. The selection of the processes cannot be postponed indefinitely.

  21. Critical-Section Solution Bounded Waiting • A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. • Assume that each process executes at a non-zero speed (and completes at a non-infinite time) • No assumption concerning relative speed of the n processes.

  22. Algorithm 1 (2 processes) • Shared variables: • int turn; // initially turn = 0 • turn = i ⇒ Pi can enter its critical section • Process Pi do { while (turn != i); /* NOP */ critical section turn = j; remainder section } while (1);

  23. Algorithm 1 • Satisfies mutual exclusion (strict alternation), but not progress. e.g., P0 exits its CS, sets turn to 1, and executes its remainder section for quite some time; P1 executes its CS, sets turn to 0, and immediately wants to execute its CS again but has to wait for its turn. • Remember that P0 is still in its remainder section and is preventing P1’s entry into the CS because it is P0’s turn… • i.e., if turn = 0 and P1 is ready to enter its CS, it can’t. • Recall the Progress requirement: • Only processes not in their remainder sections can decide who goes next. • P1 can’t enter even though P0 may be in its remainder section.

  24. Algorithm 2 • Problem with Algorithm 1: does not have info on the state of processes, only remembers who is next. • Shared variables boolean flag[2]; // remember state of process flag[0] = flag[1] = false; flag[i] = true ⇒ Pi ready to enter its critical section do { flag[i] = true; // I am ready to enter CS while (flag[j]); /* NOP */ critical section flag[i] = false; remainder section } while (1);

  25. Algorithm 2 Satisfies mutual exclusion, but (still) not the progress requirement. Counterexample: if flag[0] = true at time t0, and then P0 gets interrupted and P1 gets the CPU and sets flag[1] = true at time t1 … • … they both spin indefinitely.

  26. Algorithm 3 – Peterson’s Solution Combined shared variables of algorithms 1 and 2. Process Pi do { flag[i] = true; turn = j; while (flag[j] && turn == j); critical section flag[i] = false; remainder section } while (1); • Meets all three requirements; solves the critical-section problem for two processes.

  27. Bakery Algorithm (Critical section for n processes) • Before entering its critical section, a process receives a number. The holder of the smallest number enters the critical section. • If processes Pi and Pj receive the same number, check process/thread numbers – if i < j, then Pi is served first; else Pj is served first. • The numbering scheme always generates numbers in nondecreasing order of enumeration; i.e., 1,2,3,3,3,3,4,5...

  28. Bakery Algorithm • Notation: < is lexicographical order on (ticket #, process id #) • (a,b) < (c,d) if • a < c or • a = c and b < d (tie broken with smaller pid) • max(a0, …, an-1) is a number, k, such that k ≥ ai for i = 0, …, n – 1

  29. Bakery Algorithm • Shared data: • boolean choosing[n]; • int number[n]; • Data structures are initialized to false and 0 respectively

  30. Bakery Algorithm do { choosing[i] = true; /* Pi will be choosing (assigned) a number */ number[i] = max(number[0], number[1], …, number[n – 1]) + 1; choosing[i] = false; for (j = 0; j < n; j++) { while (choosing[j]); /* wait while Pj picks its number */ while ((number[j] != 0) && ((number[j], j) < (number[i], i))); /* lexicographic compare */ } critical section number[i] = 0; remainder section } while (1);

  31. Synchronization Hardware

  32. Synchronization Hardware • Simple in uniprocessor environment, i.e., forbid interrupts while a shared variable is modified. • Generally too inefficient on multiprocessor systems • Operating systems using this not broadly scalable • Multiprocessor environment: • Interrupt disable is infeasible, i.e., time consuming since message is passed to other processors.

  33. Synchronization Hardware • Modern machines provide special atomic hardware instructions • Atomic = non-interruptable • These can either: • test memory word and set value, or • swap contents of two memory words • They must accomplish this as one uninterruptable operation

  34. Test and Set Test and modify the content of a word atomically boolean TestAndSet(boolean *target) { boolean rv = *target; *target = true; /* blocks others until lock is set to false upon CS exit */ return rv; } Note: If two TestAndSet() instructions are executed simultaneously (on separate processors), they will be executed sequentially in some arbitrary order.

  35. Mutual Exclusion with Test-and-Set • Shared data: boolean lock = false; • Process Pi do { while (TestAndSet(&lock)); /* NOP */ critical section lock = false; remainder section } while (1);

  36. Hardware Swap • Atomically swap two variables. void swap(boolean *a, boolean *b) { boolean temp = *a; *a = *b; *b = temp; }

  37. Mutual Exclusion with Swap • Process Pi (key is a local boolean; lock is shared, initially false) do { key = true; while (key == true) swap(&lock, &key); critical section lock = false; remainder section } while (true);

  38. Semaphores • Synchronization tool that does not require busy waiting. • Semaphore S – integer variable • can only be accessed via two indivisible (atomic) operations • wait(S): while (S <= 0) ; /* NOP */ S--; • signal(S): S++;

  39. Semaphores • Counting semaphore • integer value can range over an unrestricted domain • Binary semaphore • integer value can range only between 0 and 1; can be simpler to implement. Also known as mutex locks • Can implement a counting semaphore S as a binary semaphore • Provides mutual exclusion • semaphore S; // initialized to 1 • wait (S); • Critical Section • signal (S);

  40. Semaphores • Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time • Thus, implementation becomes the critical section problem where the wait and signal code are placed in the critical section. • Could now have busy waiting in critical section implementation • But implementation code is short • Little busy waiting if critical section rarely occupied • Note that applications may spend lots of time in critical sections and therefore this is not a good solution.

  41. Semaphores – w/o Busy Waiting • Associate a waiting queue with each semaphore. • Each entry in a waiting queue has two data items: • value (of type integer) • pointer to next record in the list • Two operations: • block – place the process invoking the operation on the appropriate waiting queue. • wakeup – remove one of the processes in the waiting queue and place it in the ready queue.

  42. Semaphore Implementation • Define a semaphore as a record typedef struct { int value; struct process *L; } semaphore; • Assume two simple operations: • block suspends the process that invokes it. • wakeup(P) resumes the execution of a blocked process P.

  43. Implementation • Semaphore operations now defined as wait(S): S.value--; if (S.value < 0) { add this process to S.L; block(); }

  44. Implementation • Semaphore operations now defined as signal(S): S.value++; if (S.value <= 0) { remove a process P from S.L wakeup(P); }

  45. Critical Section of n Processes • Shared data: semaphore mutex; // initially mutex = 1 • Process Pi: do { wait(mutex); critical section signal(mutex); remainder section } while (1);

  46. Semaphore as a General Synchronization Tool • Execute B in Pj only after A is executed in Pi • Use semaphore flag initialized to 0 Code: Pi: A; signal(flag);  Pj: wait(flag); B;

  47. Deadlock and Starvation • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

  48. Deadlock and Starvation • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1: P0: wait(S); wait(Q); … signal(S); signal(Q);  P1: wait(Q); wait(S); … signal(Q); signal(S);

  49. Classical Problems of Synchronization Bounded-Buffer Problem Readers and Writers Problem Dining-Philosophers Problem

  50. Bounded-Buffer Problem • Shared data: semaphore full, empty, mutex; • Initially: full = 0, empty = n, mutex = 1
