
Chapter 6: Process Synchronization



Presentation Transcript


  1. Chapter 6: Process Synchronization

  2. Module 6: Process Synchronization • Background • The Critical-Section Problem • Peterson’s Solution • Synchronization Hardware • Semaphores • Classic Problems of Synchronization • Monitors • Synchronization Examples • Atomic Transactions

  3. Key Terms

  4. Background • Concurrent access to shared data may result in data inconsistency • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes • Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.

  5. Race Condition • An unanticipated execution ordering of concurrent flows that results in undesired behavior is called a race condition—a software defect and frequent source of vulnerabilities. • Race conditions result from runtime environments, including operating systems, that must control access to shared resources, especially through process scheduling.

  6. Race Condition Example
  • count++ could be implemented as
        register1 = count
        register1 = register1 + 1
        count = register1
  • count-- could be implemented as
        register2 = count
        register2 = register2 - 1
        count = register2
  • Consider this execution interleaving with count = 5 initially:
        S0: producer executes register1 = count           {register1 = 5}
        S1: producer executes register1 = register1 + 1   {register1 = 6}
        S2: consumer executes register2 = count           {register2 = 5}
        S3: consumer executes register2 = register2 - 1   {register2 = 4}
        S4: producer executes count = register1           {count = 6}
        S5: consumer executes count = register2           {count = 4}
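
A minimal C sketch of this race (using POSIX threads; the iteration counts and thread names are illustrative, not from the slides): two threads update the shared count without synchronization, so the final value is frequently not 0.

    #include <pthread.h>
    #include <stdio.h>

    int count = 0;                        /* shared, unsynchronized */

    void *producer(void *arg) {
        for (int i = 0; i < 1000000; i++)
            count++;                      /* load, add, store: not atomic */
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 1000000; i++)
            count--;                      /* may interleave with count++ */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("count = %d\n", count);    /* expected 0, often is not */
        return 0;
    }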

  7. Critical Section • Informally, a critical section is a code segment that accesses shared variables and has to be executed as an atomic action. • The critical section problem refers to the problem of how to ensure that at most one process is executing its critical section at a given time.

  8. Example • Suppose we have to implement a function to handle withdrawals from a bank account: • Now suppose that you and your significant other share a bank account with a balance of $1000. • Then you each go to separate ATM machines and simultaneously withdraw $100 from the account.

  9. We’ll represent the situation by creating a separate thread for each person to do the withdrawals • These threads run in the same bank process: • What’s the problem with this implementation? • Think about potential schedules of these two threads
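
The slides' actual code is not in the transcript; a hedged reconstruction in C of what such a program might look like (the function and variable names are assumptions) is below. Two threads call withdraw() on the same shared balance with no lock.

    #include <pthread.h>
    #include <stdio.h>

    int balance = 1000;                      /* shared account balance */

    int withdraw(int amount) {
        int b = balance;                     /* read the balance */
        b = b - amount;
        balance = b;                         /* write it back */
        return b;
    }

    void *person(void *arg) {
        withdraw(100);                       /* each person withdraws $100 */
        return NULL;
    }

    int main(void) {
        pthread_t you, other;
        pthread_create(&you, NULL, person, NULL);
        pthread_create(&other, NULL, person, NULL);
        pthread_join(you, NULL);
        pthread_join(other, NULL);
        printf("balance = %d\n", balance);   /* may be 900 instead of 800 */
        return 0;
    }

If the two threads interleave between the read of balance and the write back, one withdrawal is lost and the balance ends at 900 rather than 800.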

  10. The problem is that the execution of the two threads can be interleaved: • What is the balance of the account now? • This is known as a race condition • Each thread is “racing” to put_balance() before the other

  11. Solution to Critical-Section Problem • 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section • Only one process can be in the critical section at a time; otherwise it is not really a critical section • 2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that enters next cannot be postponed indefinitely • No process is forced to wait while the resource is available; otherwise the resource is wasted • 3. Bounded Waiting - There must be a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry into its critical section and before that request is granted • Assume that each process executes at a nonzero speed • No assumption is made about the relative speeds of the N processes

  12. Mutual Exclusion • One way to ensure who wins the race is to only let one thread “compete”; this is called mutual exclusion • Code that uses mutual exclusion to synchronize its execution is called a critical section • Only one thread at a time can execute in the critical section • All other threads are forced to wait on entry • When a thread leaves a critical section, another can enter

  13. Lock • To implement a critical section, we acquire a lock on entry and release it on exit • A lock is an object in memory providing two operations • Acquire(): called before entering the critical section => take the lock • Release(): called after finishing work in the critical section => return the lock • While one thread holds the lock, any other thread that wants it has to spin (wait)

  14. Between acquire() and release(), the thread holds the lock and is inside its critical section • What happens if the two operations are not paired?
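
As a concrete illustration of this pairing, here is a sketch using a POSIX mutex in place of the slides' abstract Acquire()/Release() (the mutex is an assumption, not the slides' own primitive). Every path out of the critical section must release the lock, or all other threads wait forever.

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int balance = 1000;

    int withdraw(int amount) {
        pthread_mutex_lock(&lock);      /* Acquire(): enter the critical section */
        int b = balance - amount;
        balance = b;
        pthread_mutex_unlock(&lock);    /* Release(): leave the critical section */
        return b;                       /* returning before the unlock would leave the lock held forever */
    }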

  15. How the lock works • The blue thread grabs the lock, while the other two threads keep spinning, waiting for the lock • While spinning, they are in a busy-waiting state

  16. The other two threads are placed in a queue, one after the other • This determines which of them will get the lock after the blue thread releases it

  17. The other two threads, while waiting in the queue, keep spinning • Every so often, each checks whether the blue thread has released the lock • The spinning and the repeated checks consume CPU cycles

  18. In the program

  19. Synchronization Hardware • The implementation of acquire/release must be atomic • An atomic operation is one which executes as though it could not be interrupted • Code that executes “all or nothing” or uninterruptable • How do we make them atomic? • Need help from hardware • Hardware solution : • Atomic instructions • Disable/enable interrupts (prevents context switches)

  20. Atomic Instruction • What about using a binary “lock” variable in memory and having processes check it and set it before entry to critical regions? • Many computers have some limited hardware support for setting locks • “Atomic” Test and Set Lock instruction • “Atomic” compare and swap operation
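
The slide names compare-and-swap but gives no code; a common textbook-style sketch is below. The C function only describes the behavior of the hardware instruction; on real hardware the whole sequence executes atomically.

    /* Atomically: update *ptr to new only if it still holds expected,
       and return the value that was actually there. */
    int CompareAndSwap(int *ptr, int expected, int new) {
        int actual = *ptr;
        if (actual == expected)
            *ptr = new;
        return actual;
    }

    /* A lock built on it: acquisition succeeds only when flag was 0. */
    void cas_lock(int *flag) {
        while (CompareAndSwap(flag, 0, 1) == 1)
            ;   /* spin */
    }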

  21. Test-and-Set • The lock is stored in a memory location that contains 0 or 1 • Test-and-set (an attempt to acquire) writes a 1 and returns the value previously in memory • If the returned value is 0, the process gets the lock; if it is 1, another process holds the lock • To release, just clear (set to 0) the memory location

  22. TestAndSet

      int TestAndSet(int *old_ptr, int new) {
          int old = *old_ptr;   // fetch old value at old_ptr
          *old_ptr = new;       // store 'new' into old_ptr
          return old;           // return the old value
      }

  23.
      typedef struct __lock_t { int flag; } lock_t;

      void init(lock_t *lock) {
          // 0 indicates that the lock is available, 1 that it is held
          lock->flag = 0;
      }

      void lock(lock_t *lock) {
          while (TestAndSet(&lock->flag, 1) == 1)
              ;   // spin-wait (do nothing)
      }

      void unlock(lock_t *lock) {
          lock->flag = 0;
      }
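
A short usage sketch of this spin lock (the counter and the worker function are illustrative): every increment of the shared counter is wrapped in lock()/unlock(), so the count++ race from earlier cannot occur.

    lock_t counter_lock;
    int count = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            lock(&counter_lock);      /* spin until TestAndSet returns 0 */
            count++;                  /* critical section */
            unlock(&counter_lock);
        }
        return NULL;
    }

    /* In main(): call init(&counter_lock) once, then create and join the worker threads. */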

  24. Problem with Lock • The problem with this implementation is busy waiting • Busy waiting wastes CPU cycles • The longer the critical section, the longer the spin • And the greater the chance that the lock holder is interrupted while holding the lock • While a thread is spinning on a lock, the thread holding the lock cannot use that CPU to make progress

  25. How did the lock holder give up the CPU in the first place? • The lock holder calls yield or sleep (voluntary) • Or an involuntary context switch occurs • We only want to use spinlocks as primitives to build higher-level synchronization constructs
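
One simple way to avoid burning a whole time slice while spinning is to yield the CPU on every failed attempt. A sketch, assuming a POSIX system with sched_yield() (this variant is not from the slides):

    #include <sched.h>

    void lock_yield(lock_t *lock) {
        while (TestAndSet(&lock->flag, 1) == 1)
            sched_yield();   /* give up the CPU instead of spinning */
    }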

  26. Disabling Interrupt • On uniprocessors we could disable interrupts • Currently running code would then execute without preemption • Generally too inefficient on multiprocessor systems • Operating systems that rely on this do not scale well

  27. Disabling interrupts blocks notification of external events that could trigger a context switch (e.g., timer)

  28. Solution

  29. Implementing Lock by Disabling Interrupt
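
The slide's code is not in the transcript; on a uniprocessor kernel the idea is usually sketched as below. disable_interrupts() and enable_interrupts() stand in for privileged hardware operations, not real library calls, and the approach only works for kernel code on a single CPU.

    void lock(void) {
        disable_interrupts();    /* no timer interrupt => no preemption inside the critical section */
    }

    void unlock(void) {
        enable_interrupts();     /* context switches are possible again */
    }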

  30. Classical Problems of Synchronization • Bounded-Buffer Problem • Readers and Writers Problem • Dining-Philosophers Problem

  31. The Producer - Consumer Problem • Producer pushes items into the buffer. • Consumer pulls items from the buffer. • Producer needs to wait when buffer is full. • Consumer needs to wait when the buffer is empty.

  32. Potential Problem • Detecting when the buffer is full or empty • The buffer size has to be the same for both producer and consumer

  33. Readers-Writers Problem • A data set is shared among a number of concurrent processes • Readers – only read the data set; they do not perform any updates • Writers – can both read and write • Multiple readers are allowed to read at the same time; only a single writer may access the shared data at any one time

  34. Multiple readers or a single writer can be in the database at a time

  35. Potential Problem • How to allow either multiple readers or a single writer to access the shared data at a time
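
One standard answer (the "first" readers-writers solution) is sketched below in the wait()/signal() pseudocode used later in these slides; the variable names follow the usual textbook convention and are not taken from this deck.

    semaphore mutex = 1;    // protects readcount
    semaphore wrt   = 1;    // held by a writer, or by the group of readers
    int readcount = 0;

    writer() {
        wait(wrt);
        // ... write the shared data ...
        signal(wrt);
    }

    reader() {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);        // first reader locks out writers
        signal(mutex);

        // ... read the shared data ...

        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);      // last reader lets writers in again
        signal(mutex);
    }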

  36. Dining-Philosophers Problem

  37. N philosophers sit at a circular table • One chopstick is placed between each pair of neighboring philosophers • A philosopher has two states: eating and thinking • To enter the eating state, a philosopher must pick up two chopsticks (both the left and the right one), one at a time • When satisfied, the philosopher puts down both chopsticks, one at a time • The philosopher then returns to the thinking state

  38. Potential Problem • Every philosopher has access to the rice bowl => no starvation from lack of food • But two philosophers sitting next to each other may grab the same chopstick => this can cause deadlock
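
A sketch of the naive semaphore solution that has exactly this deadlock risk (N, the chopstick array, and the loop structure follow the usual textbook formulation and are assumptions here):

    #define N 5
    semaphore chopstick[N];            // each initialized to 1

    philosopher(int i) {
        while (1) {
            // think
            wait(chopstick[i]);              // pick up the left chopstick
            wait(chopstick[(i + 1) % N]);    // pick up the right chopstick
            // eat
            signal(chopstick[i]);
            signal(chopstick[(i + 1) % N]);
        }
    }

If every philosopher picks up the left chopstick at the same time, each then blocks forever on the right one: deadlock.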

  39. Real Example of Synchronization Problem • Producer-consumer • Audio-Video player: network and display threads; shared buffer • Reader-writer • Banking system: • read account balances versus update

  40. Dining Philosophers • Cooperating processes that need to share limited resources • Set of processes that need to lock multiple resources • Disk and tape (backup) • Travel reservation: • hotel, airline, car rental databases

  41. Semaphore • A semaphore keeps track of free resources, so a process does not need to busy wait • It is a synchronization tool that does not require busy waiting • Busy waiting means spinning in a while loop to check the lock condition, which keeps the process runnable and consuming CPU even though it cannot make progress until the condition changes • Semaphore S – integer variable • Two standard operations modify S: wait() and signal() • Originally called P() and V() • Less complicated • Note: a semaphore can only indicate whether a free resource exists; there is no mechanism to choose which resource goes to which process

  42. Semaphore as General Synchronization Tool • Counting semaphore – integer value can range over an unrestricted domain • Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement • Also known as mutex locks • Can implement a counting semaphore S as a binary semaphore • Provides mutual exclusion:

      Semaphore S;    // initialized to 1
      wait(S);
          // critical section
      signal(S);

  43. Counting Semaphore • Can only be accessed via two indivisible (atomic) operations:

      wait(S) {
          while (S <= 0)
              ;        // no-op: busy wait while no resource is free
          S--;         // claim a resource
      }

      signal(S) {
          S++;         // release a resource; a waiting process may now proceed
      }

  44. Semaphore Implementation • Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time • Thus, the implementation itself becomes a critical-section problem, with the wait and signal code placed in the critical section • Could now have busy waiting in the critical-section implementation • But the implementation code is short • Little busy waiting if the critical section is rarely occupied • Note that applications may spend lots of time in critical sections, so this is not a good general solution

  45. Semaphore Implementation with no Busy waiting • With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items: • value (of type integer) • pointer to the next record in the list • Two operations: • block – place the process invoking the operation on the appropriate waiting queue => used to implement wait() • wakeup – remove one of the processes in the waiting queue and place it in the ready queue => used to implement signal() • http://williamstallings.com/OS-Animation/Queensland/SEMA.SWF

  46. Semaphore Implementation with no Busy waiting (Cont.) • Implementation of wait:

      wait(S) {
          value--;
          if (value < 0) {
              // add this process to the waiting queue
              block();
          }
      }

  • Implementation of signal:

      signal(S) {
          value++;
          if (value <= 0) {
              // remove a process P from the waiting queue
              wakeup(P);
          }
      }

  47. Deadlock and Starvation • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes • Let S and Q be two semaphores initialized to 1:

      P0:                P1:
        wait(S);           wait(Q);
        wait(Q);           wait(S);
        ...                ...
        signal(S);         signal(Q);
        signal(Q);         signal(S);

  • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

  48. Solving Bounded-Buffer Problem with semaphore • The buffer size must be the same for both producer and consumer • While touching the buffer (either producing or consuming), you must make sure that no other thread changes the data • When the operation finishes, tell the other side (producer/consumer) how many empty slots / full slots there are • Shared data: a buffer with N slots • Semaphores: • Binary semaphore mutex initialized to the value 1 • Counting semaphore full initialized to the value 0 • Counting semaphore empty initialized to the value N • http://williamstallings.com/OS-Animation/Queensland/BB.SWF
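
A sketch of the structure this slide describes, again in the slides' wait()/signal() pseudocode (buffer manipulation is elided):

    semaphore mutex = 1;     // protects the buffer itself
    semaphore empty = N;     // number of empty slots
    semaphore full  = 0;     // number of full slots

    producer() {
        while (1) {
            // produce an item
            wait(empty);         // wait for an empty slot
            wait(mutex);
            // add the item to the buffer
            signal(mutex);
            signal(full);        // tell the consumer a slot has been filled
        }
    }

    consumer() {
        while (1) {
            wait(full);          // wait for a full slot
            wait(mutex);
            // remove an item from the buffer
            signal(mutex);
            signal(empty);       // tell the producer a slot has been freed
            // consume the item
        }
    }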
