
Process Synchronization (Or The “Joys” of Concurrent Programming)

This presentation gives an overview of process synchronization in concurrent programming: race conditions, the critical-section problem (mutual exclusion, progress, bounded waiting), Peterson's solution, semaphores, classic problems of synchronization, and monitors with condition variables.

Presentation Transcript


  1. Process Synchronization (Or The “Joys” of Concurrent Programming)

  2. Overview: Process Synchronization • Background • The Critical-Section Problem • Peterson’s Solution • Semaphores • Classic Problems of Synchronization

  3. Background • Fact of Life 1: Concurrent access to shared data may result in data inconsistency • Fact of Life 2: Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes • Example?

  4. Producer-Consumer Example (figure: a Producer process and a Consumer process sharing a buffer)

  5. Race Condition
     • count++ could be implemented as: register1 = count; register1 = register1 + 1; count = register1
     • count-- could be implemented as: register2 = count; register2 = register2 - 1; count = register2
     • Possible execution (with “count = 5” initially):
       S0: producer executes register1 = count          {register1 = 5}
       S1: producer executes register1 = register1 + 1  {register1 = 6}
       S2: consumer executes register2 = count          {register2 = 5}
       S3: consumer executes register2 = register2 - 1  {register2 = 4}
       S4: producer executes count = register1          {count = 6}
       S5: consumer executes count = register2          {count = 4}
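
A minimal Java sketch of this race (the class and its names are my own, purely illustrative): two threads perform the unsynchronized read-modify-write above, so the final counter value is usually not the expected 0.

    // Hypothetical demo of the count++/count-- race; all names are illustrative.
    public class RaceDemo {
        static int count = 0;                                // shared, unsynchronized

        public static void main(String[] args) throws InterruptedException {
            Thread producer = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) count++;   // read-modify-write, not atomic
            });
            Thread consumer = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) count--;
            });
            producer.start(); consumer.start();
            producer.join(); consumer.join();
            System.out.println("count = " + count);          // expected 0, but often isn't
        }
    }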

  6. Solution to Critical-Section Problem 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted • Assume that each process executes at a nonzero speed • No assumption concerning relative speed of the N processes

  7. Critical-Section Problem • Race Condition - When there is concurrent access to shared data and the final outcome depends upon the order of execution. • Critical Section - Section of code where shared data is accessed. • Entry Section - Code that requests permission to enter the critical section. • Exit Section - Code that is run on leaving the critical section (releasing the permission to enter).

  8. Structure of a Typical Process

  9. Peterson’s Solution • Two-process solution • Assume that the LOAD and STORE instructions are atomic; that is, they cannot be interrupted. • The two processes share two variables: • int turn; • Boolean flag[2] • The variable turn indicates whose turn it is to enter the critical section. • The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready!

  10. Algorithm for Process Pi. Legend: j is the index of the other process. Does this work? Why? Can it be made simpler?
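
A sketch of Peterson's algorithm in Java (my own rendering, not the figure from the slide). Declaring every shared variable volatile is what approximates the slide's assumption of atomic, uninterruptible LOADs and STOREs.

    // Peterson's two-process mutual exclusion; process i is 0 or 1, j = 1 - i.
    // All shared variables are volatile so that their loads and stores behave
    // (close to) atomically and in program order, as the slide assumes.
    class Peterson {
        private volatile boolean flag0, flag1;  // flagI == true: Pi wants to enter
        private volatile int turn;              // index of the process that must yield

        void enter(int i) {
            if (i == 0) {
                flag0 = true;                   // announce interest
                turn = 1;                       // give priority to the other process
                while (flag1 && turn == 1) { }  // busy wait
            } else {
                flag1 = true;
                turn = 0;
                while (flag0 && turn == 0) { }
            }
        }

        void exit(int i) {
            if (i == 0) flag0 = false; else flag1 = false;   // no longer interested
        }
    }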

  11. Historical Aside: Dekker’s Algorithm Why does it work?
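
For comparison, a sketch of Dekker's algorithm (again my own rendering, not the slide's figure): a process that wants to enter backs off only when it is the other process's turn, which is what rules out both deadlock and lockout.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Dekker's two-process mutual exclusion; process i is 0 or 1, j = 1 - i.
    class Dekker {
        private final AtomicBoolean[] wantIn = { new AtomicBoolean(), new AtomicBoolean() };
        private volatile int turn = 0;           // whose turn it is when both want in

        void enter(int i) {
            int j = 1 - i;
            wantIn[i].set(true);
            while (wantIn[j].get()) {            // the other process wants in too
                if (turn == j) {                 // it is the other's turn: back off
                    wantIn[i].set(false);
                    while (turn == j) { }        // busy wait for our turn
                    wantIn[i].set(true);
                }
            }
        }

        void exit(int i) {
            turn = 1 - i;                        // hand the turn to the other process
            wantIn[i].set(false);
        }
    }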

  12. Aside: How Many Shared Variables? Peterson’s mutex algorithm for two processes uses two boolean variables and one integer variable. How many variables does one need in order to achieve deadlock-free mutex? Theorem (James Burns and Nancy Lynch, 1980): N binary variables are necessary and sufficient to achieve deadlock-free mutual exclusion amongst N processes. Question: Is this good news? But... one shared register is enough under timing assumptions! See Michael Fischer’s classic algorithm.

  13. Aside: Fischer’s Algorithm. The delay is chosen to be larger than the longest time it takes to execute an instruction. (Simulate the Uppaal demo of Fischer’s algorithm!) End of aside!
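
A sketch of Fischer's timing-based protocol (my own rendering; DELAY_MS is an assumed stand-in for the delay bound the slide mentions): a process claims the shared register, waits out any competing writes, and enters only if its claim survived.

    // Fischer's timing-based mutual exclusion for process i (i >= 0).
    // Correctness relies on the timing assumption: DELAY_MS must exceed the
    // longest time any process can take between reading id == -1 and
    // completing its own write id = i.
    class Fischer {
        private volatile int id = -1;              // -1 means the lock is free
        private static final long DELAY_MS = 1;    // assumed to satisfy the bound above

        void enter(int i) throws InterruptedException {
            do {
                while (id != -1) { }               // wait until the lock looks free
                id = i;                            // claim it
                Thread.sleep(DELAY_MS);            // wait out any competing writes
            } while (id != i);                     // our claim was overwritten: retry
        }

        void exit(int i) {
            id = -1;                               // release
        }
    }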

  14. Critical Section Using Locks
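
The slide's figure (not reproduced in this transcript) presumably shows the generic acquire-lock / critical-section / release-lock pattern. A minimal Java sketch of that pattern using java.util.concurrent.locks.ReentrantLock; the class and field names are illustrative.

    import java.util.concurrent.locks.ReentrantLock;

    class LockedCounter {
        private final ReentrantLock lock = new ReentrantLock();
        private int count = 0;                      // shared data

        void increment() {
            lock.lock();                            // entry section: acquire the lock
            try {
                count++;                            // critical section
            } finally {
                lock.unlock();                      // exit section: always release
            }
        }
    }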

  15. Semaphore (Dijkstra) • Synchronization tool that does not require busy waiting • Semaphore S – integer variable • Two standard operations modify S: acquire() and release() • Originally called P() (from the Dutch proberen, “to test”) and V() (verhogen, “to increment”) • Can only be accessed via two indivisible (atomic) operations

  16. Semaphore as General Synchronization Tool • Counting semaphore – integer value can range over an unrestricted domain • Binary semaphore – integer value can range only between 0 and 1 • Also known as mutex locks
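
A small sketch of a binary semaphore used as a mutex lock, written with java.util.concurrent.Semaphore (the shared counter and the method name are illustrative):

    import java.util.concurrent.Semaphore;

    class SemaphoreMutexDemo {
        private final Semaphore mutex = new Semaphore(1);  // binary semaphore: value 0 or 1
        private int shared = 0;

        void update() throws InterruptedException {
            mutex.acquire();       // P / proberen: take the single permit (or block)
            try {
                shared++;          // critical section
            } finally {
                mutex.release();   // V / verhogen: give the permit back
            }
        }
    }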

  17. Semaphore Implementation with No Busy Waiting • With each semaphore there is an associated waiting queue and a value (of type integer). • Two operations: • block – place the process invoking the operation on the appropriate waiting queue. • wakeup – remove one of the processes in the waiting queue and place it in the ready queue.

  18. Semaphore Implementation with No Busy Waiting (Cont.) • Implementation of acquire(): • Implementation of release(): So, is the world of concurrency nice and easy?
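
The acquire()/release() bodies on this slide are figures that did not survive the transcript. As a stand-in, here is a minimal Java sketch of a semaphore with no busy waiting; Java's intrinsic wait()/notify() play the role of block/wakeup. (The bookkeeping differs slightly from the textbook version, which lets the value go negative, but the idea of suspending the caller instead of spinning is the same.)

    class BlockingSemaphore {
        private int value;

        BlockingSemaphore(int initial) { value = initial; }

        synchronized void acquire() throws InterruptedException {
            while (value == 0) {
                wait();            // block: suspend on the semaphore's waiting queue
            }
            value--;
        }

        synchronized void release() {
            value++;
            notify();              // wakeup: move one waiting thread to the ready queue
        }
    }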

  19. Deadlock and Starvation • Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes • Let S and Q be two binary semaphores:
       P0:              P1:
       S.acquire();     Q.acquire();
       Q.acquire();     S.acquire();
         ...              ...
       S.release();     Q.release();
       Q.release();     S.release();
     • Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
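
A runnable sketch of this exact deadlock (the thread setup and sleeps are my own): P0 holds S and waits for Q while P1 holds Q and waits for S, so with the sleeps widening the window the program usually hangs.

    import java.util.concurrent.Semaphore;

    public class DeadlockDemo {
        static final Semaphore S = new Semaphore(1);
        static final Semaphore Q = new Semaphore(1);

        public static void main(String[] args) {
            new Thread(() -> {                     // P0
                try {
                    S.acquire();
                    Thread.sleep(10);              // widen the window for the deadlock
                    Q.acquire();
                    Q.release();
                    S.release();
                } catch (InterruptedException ignored) { }
            }).start();

            new Thread(() -> {                     // P1
                try {
                    Q.acquire();
                    Thread.sleep(10);
                    S.acquire();
                    S.release();
                    Q.release();
                } catch (InterruptedException ignored) { }
            }).start();
        }
    }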

  20. Classical Problems of Synchronization • Bounded-Buffer Problem • Readers and Writers Problem • Dining-Philosophers Problem

  21. Bounded-Buffer Problem • N buffers, each can hold one item • Semaphore mutex initialized to the value 1 • Semaphore full initialized to the value 0 • Semaphore empty initialized to the value N.

  22. Bounded-Buffer Problem

  23. Bounded-Buffer Problem

  24. Bounded Buffer Problem (Cont.) • The structure of the producer process

  25. Bounded Buffer Problem (Cont.) • The structure of the consumer process
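
Slides 22-26 show the buffer, producer, and consumer as figures. A self-contained sketch of the same structure, using the three semaphores initialized as on slide 21 (the buffer size, item type, and names are illustrative):

    import java.util.concurrent.Semaphore;

    class BoundedBuffer {
        private static final int N = 5;
        private final Object[] buffer = new Object[N];
        private int in = 0, out = 0;

        private final Semaphore mutex = new Semaphore(1); // protects the buffer
        private final Semaphore empty = new Semaphore(N); // counts free slots
        private final Semaphore full  = new Semaphore(0); // counts filled slots

        // Called by the producer.
        void insert(Object item) throws InterruptedException {
            empty.acquire();                  // wait for a free slot
            mutex.acquire();
            buffer[in] = item;
            in = (in + 1) % N;
            mutex.release();
            full.release();                   // one more item available
        }

        // Called by the consumer.
        Object remove() throws InterruptedException {
            full.acquire();                   // wait for an item
            mutex.acquire();
            Object item = buffer[out];
            out = (out + 1) % N;
            mutex.release();
            empty.release();                  // one more free slot
            return item;
        }
    }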

  26. Bounded Buffer Problem (Cont.) • The Factory
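
And a hypothetical "factory" driver that wires one producer thread and one consumer thread to the BoundedBuffer sketch above (the item count is arbitrary):

    public class Factory {
        public static void main(String[] args) throws InterruptedException {
            BoundedBuffer buffer = new BoundedBuffer();   // from the sketch above
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) buffer.insert(i);
                } catch (InterruptedException ignored) { }
            });
            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) System.out.println(buffer.remove());
                } catch (InterruptedException ignored) { }
            });
            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }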

  27. Readers-Writers Problem • A data set is shared among a number of concurrent processes • Readers – only read the data set; they do not perform any updates • Writers – can both read and write. • Problem – allow multiple readers to read at the same time. Only one writer can access the shared data at the same time. • Shared Data • Data set • Semaphore mutex initialized to 1. (Ensures mutex when readerCount is updated.) • Semaphore db initialized to 1. (Mutex for writers, and prevents writers from entering if db is being read.) • Integer readerCount initialized to 0.

  28. Readers-Writers Problem Interface for read-write locks How would you implement acquireReadLock and releaseReadLock?

  29. Readers-Writers Problem Methods called by writers.

  30. Readers-Writers Problem • The structure of a writer process

  31. Readers-Writers Problem • The structure of a reader process
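
The reader and writer method bodies on slides 28-31 are figures. A sketch of the classic first readers-writers solution, built from the shared data listed on slide 27 (mutex, db, readerCount); the class name RWLock is illustrative. Note this variant may starve writers, since a steady stream of readers keeps db held.

    import java.util.concurrent.Semaphore;

    class RWLock {
        private final Semaphore mutex = new Semaphore(1); // protects readerCount
        private final Semaphore db = new Semaphore(1);    // exclusive access to the data set
        private int readerCount = 0;

        void acquireReadLock() throws InterruptedException {
            mutex.acquire();
            readerCount++;
            if (readerCount == 1) db.acquire();  // first reader locks out writers
            mutex.release();
        }

        void releaseReadLock() throws InterruptedException {
            mutex.acquire();
            readerCount--;
            if (readerCount == 0) db.release();  // last reader lets writers in
            mutex.release();
        }

        void acquireWriteLock() throws InterruptedException {
            db.acquire();                        // a writer needs the data set alone
        }

        void releaseWriteLock() {
            db.release();
        }
    }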

  32. Dining-Philosophers Problem (Dijkstra) • Shared data • Bowl of rice (data set) • Semaphore chopStick[5] initialized to 1

  33. Dining-Philosophers Problem (Cont.) • The structure of Philosopher i: Does this work?
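
A sketch of the straightforward philosopher loop from this slide (the figure itself is not in the transcript): each philosopher takes the left chopstick, then the right. The answer to "Does this work?" is visible in the code: if all five grab their left chopstick at the same time, every one waits forever for the right one.

    import java.util.concurrent.Semaphore;

    class DiningPhilosophers {
        private final Semaphore[] chopStick = new Semaphore[5];

        DiningPhilosophers() {
            for (int i = 0; i < 5; i++) chopStick[i] = new Semaphore(1);
        }

        // The structure of philosopher i (deadlock-prone as written).
        void philosopher(int i) throws InterruptedException {
            while (true) {
                chopStick[i].acquire();              // pick up left chopstick
                chopStick[(i + 1) % 5].acquire();    // pick up right chopstick
                // eat
                chopStick[(i + 1) % 5].release();    // put down right chopstick
                chopStick[i].release();              // put down left chopstick
                // think
            }
        }
    }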

  34. Problems with Semaphores • Correct use of semaphore operations: mutex.acquire() … mutex.release() • Incorrect use: mutex.acquire() … mutex.acquire() (the second acquire blocks forever) • Incorrect use: omitting mutex.acquire() or mutex.release() (or both)

  35. Monitors (Brinch-Hansen, Hoare) • A high-level abstraction that provides a convenient and effective mechanism for process synchronization • Key property: Only one process may be active within the monitor at a time
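
In Java, the closest built-in analogue of a monitor is a class whose methods are all synchronized: at most one thread can be active inside such an object at a time. A minimal illustrative sketch:

    class MonitorCounter {
        private int count = 0;

        public synchronized void increment() { count++; }   // only one thread active at a time
        public synchronized int get() { return count; }
    }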

  36. Syntax of a Monitor

  37. Schematic view of a Monitor

  38. Condition Variables • Condition x, y; • Two operations on a condition variable: • x.wait() – a process that invokes the operation is suspended. • x.signal() – resumes one of the processes (if any) that invoked x.wait()
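
A small Java sketch of condition-variable usage with java.util.concurrent.locks.Condition, whose await()/signal() correspond to the slide's x.wait()/x.signal(); the one-slot buffer and its names are illustrative.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    class OneSlotBuffer {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition notFull  = lock.newCondition();
        private final Condition notEmpty = lock.newCondition();
        private Object item;                            // null means "empty"

        void put(Object x) throws InterruptedException {
            lock.lock();
            try {
                while (item != null) notFull.await();   // wait until the slot is free
                item = x;
                notEmpty.signal();                      // resume one waiting consumer
            } finally {
                lock.unlock();
            }
        }

        Object take() throws InterruptedException {
            lock.lock();
            try {
                while (item == null) notEmpty.await();  // wait until an item arrives
                Object x = item;
                item = null;
                notFull.signal();                       // resume one waiting producer
                return x;
            } finally {
                lock.unlock();
            }
        }
    }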

  39. Monitor with Condition Variables

  40. Solution to Dining Philosophers

  41. Solution to Dining Philosophers (Cont.) • Each philosopher invokes the operations takeForks(i) and returnForks(i) in the following sequence: dp.takeForks(i); EAT; dp.returnForks(i)
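
A sketch of the monitor-based solution from slides 40-41, expressed with Java's intrinsic monitor (synchronized/wait/notifyAll) rather than the per-philosopher condition variables in the slide's figure; the rule is the same: a philosopher may start eating only when neither neighbour is eating, so neighbouring philosophers never hold forks at the same time.

    class DiningPhilosophersMonitor {
        private enum State { THINKING, HUNGRY, EATING }
        private final State[] state = new State[5];

        DiningPhilosophersMonitor() {
            for (int i = 0; i < 5; i++) state[i] = State.THINKING;
        }

        synchronized void takeForks(int i) throws InterruptedException {
            state[i] = State.HUNGRY;
            while (!canEat(i)) wait();       // wait until neither neighbour is eating
            state[i] = State.EATING;
        }

        synchronized void returnForks(int i) {
            state[i] = State.THINKING;
            notifyAll();                     // let hungry neighbours re-check their state
        }

        private boolean canEat(int i) {
            return state[(i + 4) % 5] != State.EATING    // left neighbour
                && state[(i + 1) % 5] != State.EATING;   // right neighbour
        }
    }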
