# Mutual Exclusion Using Atomic Registers

##### Presentation Transcript

1. Mutual Exclusion Using Atomic Registers • Lecturer: Netanel Dahan • Instructor: Prof. Yehuda Afek • B.Sc. Seminar on Distributed Computation, Tel-Aviv University, 11.03.07 • Based on the book ‘Synchronization Algorithms and Concurrent Programming’ by Gadi Taubenfeld

2. Overview • Introduction. • Algorithms for two processes: Peterson’s and Kessels’ algorithms. • Tournament algorithms. • Lamport’s fast algorithm. • Starvation free algorithms: the bakery algorithm and the black-white bakery version. • Tight space bounds: lower and upper bounds of shared resources.

3. Introduction

4. The Mutual Exclusion Problem The mutual exclusion problem is that of guaranteeing mutually exclusive access to a shared resource, or resources, when there are several competing processes. A situation where several processes may access the same resource and the final result depends on who runs when is called a race condition; the problem is essentially avoiding such conditions. The problem was first introduced by Edsger W. Dijkstra in 1965.

5. General Solution To solve the problem, we wrap the critical section with entry and exit code, in a way which guarantees that the mutual exclusion and deadlock freedom properties are satisfied. • entry code • critical section: the part of the code in which the shared resources reside. • exit code • remainder code: surprise, surprise, the rest of the code.

6. Assumptions • The remainder code may not influence other processes. • Shared objects appearing in the entry or exit code may not be referred to in the remainder or critical section. • A process cannot fail when it is outside the remainder. • Once a process starts executing the CS and exit code, it always finishes them.

7. Related Concepts • Mutual Exclusion: No two processes are in their critical section at the same time. • Deadlock Freedom: If a process is trying to enter its critical section, then some process, not necessarily the same one, will eventually enter the critical section. • Starvation Freedom: If a process is trying to enter its critical section, eventually it will succeed.

8. Algorithms for two processes

9. Algorithms for two processes We start by describing two algorithms that solve the mutual exclusion problem for two processes. They will be used for introducing the problem and possible solutions using atomic registers. Throughout the presentation, we assume that the only atomic operations on shared registers are reads and writes. In addition, we will use the statement await condition as an abbreviation for while !condition do skip.
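As a small illustrative sketch (the helper name `await_` and the flag are ours, not from the book), the await abbreviation amounts to a busy-wait loop:

```python
import threading
import time

def await_(condition):
    """'await condition' abbreviates 'while !condition do skip' (busy-waiting)."""
    while not condition():
        time.sleep(0)  # yield instead of a pure spin, to be kind to the scheduler

# Usage: one thread busy-waits until another sets a shared flag.
flag = [False]
setter = threading.Timer(0.05, lambda: flag.__setitem__(0, True))
setter.start()
await_(lambda: flag[0])
print("flag observed:", flag[0])
```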

10. Peterson’s Algorithm • Developed by Gary L. Peterson in 1981. • The algorithm makes use of a register called turn, which can take the values 0 and 1, the identifiers of the two possible processes, and two boolean registers b[0] and b[1]. • Both processes can read and write turn and read b[0] and b[1], but only process i can write to b[i].

11. Peterson’s Algorithm Initially: b[0] = b[1] = false, turn is immaterial. • Process 0: • b[0] := true; • turn := 0; • await (b[1] = false or turn = 1); • critical section; • b[0] := false; • Process 1: • b[1] := true; • turn := 1; • await (b[0] = false or turn = 0); • critical section; • b[1] := false;

12. Peterson’s Algorithm • Process i: • b[i] := true; (indicate I am in contention for the critical section) • turn := i; (cross the turn barrier and indicate the crossing for later observation) • await (b[1-i] = false or turn = 1-i); (if there is no contention I can enter the CS; otherwise, if I crossed the barrier first I can enter, else I have to wait) • critical section; (do my thang…) • b[i] := false; (indicate I am not contending anymore)
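A runnable sketch of Peterson’s algorithm, in Python rather than the book’s pseudocode. One assumption is doing the heavy lifting here: under CPython the GIL makes individual reads and writes of shared variables effectively atomic and sequentially consistent, standing in for atomic registers.

```python
import threading
import time

b = [False, False]   # b[i]: process i is contending
turn = 0             # the barrier register
counter = 0          # shared resource protected by the lock

def lock(i):
    global turn
    b[i] = True                           # announce contention
    turn = i                              # cross the turn barrier
    while b[1 - i] and turn != 1 - i:     # await (b[1-i] = false or turn = 1-i)
        time.sleep(0)                     # yield while spinning

def unlock(i):
    b[i] = False                          # not contending anymore

def worker(i, iters):
    global counter
    for _ in range(iters):
        lock(i)
        counter += 1                      # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i, 5000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000: no increment was lost
```

Without the lock, the unprotected `counter += 1` (a read, an add, and a write) could lose updates under interleaving; with it, every increment survives.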

13. Properties • Satisfies mutual exclusion and starvation freedom. • Contention-free time complexity is four accesses to the shared memory. • Process time complexity is unbounded. • Three shared registers are used.

14. Kessels’ single-writer algorithm • A variation of Peterson’s algorithm which uses only single-writer registers. • Uses two registers turn[0] and turn[1], which can take the values 0 and 1, and two boolean registers b[0] and b[1]. • Developed by J. L. W. Kessels in 1982.

15. Kessels’ single-writer algorithm Initially: b[0] = b[1] = false, turn[0] and turn[1] are immaterial. Only process i can write to b[i] and turn[i]. local[i] is local to process i. • Process 0: • b[0] := true; • local[0] := turn[1]; • turn[0] := local[0]; • await (b[1] = false or local[0] ≠ turn[1]); • critical section; • b[0] := false; • Process 1: • b[1] := true; • local[1] := 1 - turn[0]; • turn[1] := local[1]; • await (b[0] = false or local[1] = turn[0]); • critical section; • b[1] := false;
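A hedged Python sketch of the single-writer variant (our translation, under the same CPython GIL atomicity assumption as before). Note that no register has two writers: b[i] and turn[i] are written only by process i; the two-valued turn of Peterson’s algorithm is recovered as turn[0] XOR turn[1].

```python
import threading
import time

b = [False, False]     # b[i] is written only by process i
turn = [0, 0]          # turn[i] is written only by process i
counter = 0            # shared resource protected by the lock

def lock0():
    b[0] = True
    local = turn[1]
    turn[0] = local
    while b[1] and local == turn[1]:    # await (b[1] = false or local != turn[1])
        time.sleep(0)

def lock1():
    b[1] = True
    local = 1 - turn[0]
    turn[1] = local
    while b[0] and local != turn[0]:    # await (b[0] = false or local = turn[0])
        time.sleep(0)

def worker(i, iters):
    global counter
    enter = lock0 if i == 0 else lock1
    for _ in range(iters):
        enter()
        counter += 1       # critical section
        b[i] = False       # exit code: b[i] := false

threads = [threading.Thread(target=worker, args=(i, 5000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000 if mutual exclusion held
```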

16. Properties • Same as Peterson’s algorithm, except that four shared registers are used. • In addition, the algorithm satisfies local spinning.

17. Local Spinning • Accessing a physically remote register is costly. • When a process waits, using an await statement, it does so by spinning (busy-waiting) on registers. • It is much more efficient to spin on a locally-accessible register.

18. Kessels’ single-writer algorithm Initially: b[0] = b[1] = false, turn[0] and turn[1] are immaterial. Only process i can write to turn[i]. local[i] is local to process i. • Process 0: • b[0] := true; • local[0] := turn[1]; • turn[0] := local[0]; • await (b[1] = false or local[0] ≠ turn[1]); • critical section; • b[0] := false; • Process 1: • b[1] := true; • local[1] := 1 - turn[0]; • turn[1] := local[1]; • await (b[0] = false or local[1] = turn[0]); • critical section; • b[1] := false;

19. Tournament Algorithms

20. Tournament Algorithms • A generalization method which enables the construction of an algorithm for n processes from any given solution for 2 processes. • Developed by Gary L. Peterson and Michael J. Fischer in 1977.

21. Tournament Algorithms

22. Tournament Algorithms • An important side effect is that a process may enter the critical section an arbitrary number of times before some other process in a different subtree does.
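The tournament construction can be sketched concretely: a minimal Python version for N = 4 processes, using Peterson two-process locks as the tree nodes (the heap layout and helper names are our choices, not the paper’s). Each process climbs from its leaf to the root, playing side 0 or 1 at each node, and releases the locks in reverse order.

```python
import threading
import time

N = 4  # number of processes; the tree has N - 1 internal Peterson nodes

class Peterson:
    """Two-process Peterson lock; 'side' is 0 or 1 at this tree node."""
    def __init__(self):
        self.b = [False, False]
        self.turn = 0
    def lock(self, side):
        self.b[side] = True
        self.turn = side
        while self.b[1 - side] and self.turn != 1 - side:
            time.sleep(0)
    def unlock(self, side):
        self.b[side] = False

nodes = [Peterson() for _ in range(N - 1)]  # heap layout: node k's children are 2k+1, 2k+2

def lock(i):
    k = N - 1 + i            # process i's virtual leaf index in the heap
    path = []
    while k > 0:
        parent, side = (k - 1) // 2, (k - 1) % 2
        nodes[parent].lock(side)   # win my two-process match at this node
        path.append((parent, side))
        k = parent
    return path              # the root lock is acquired last

def unlock(path):
    for parent, side in reversed(path):   # release root first
        nodes[parent].unlock(side)

counter = 0

def worker(i, iters):
    global counter
    for _ in range(iters):
        path = lock(i)
        counter += 1         # critical section
        unlock(path)

threads = [threading.Thread(target=worker, args=(i, 2000)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 8000: mutual exclusion held across all four processes
```

At every internal node at most one process from each subtree competes, so each Peterson instance really does face only two contenders, which is exactly why the two-process solution composes.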

23. Lamport’s Fast Algorithm

24. Lamport’s Fast algorithm • An algorithm for n processes. • Provides fast access to the critical section in the absence of contention. • Uses two registers, x and y, each long enough to store a process identifier, and an array of n boolean registers. • Developed by Leslie Lamport in 1987.

25. Lamport’s Fast algorithm • Process i’s program: • start: b[i] := true; • x := i; • if y ≠ 0 then b[i] := false; • await y = 0; • goto start fi; • y := i; • if x ≠ i then b[i] := false; • for j := 1 to n do await !b[j] od; • if y ≠ i then await y = 0; • goto start fi fi; • critical section; • y := 0; • b[i] := false;
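A Python rendering of the slide above, with the gotos rewritten as a retry loop (our translation; CPython’s GIL again stands in for atomic registers, and process ids run 1..N so that y = 0 can mean "no one at the barrier"):

```python
import threading
import time

N = 3
b = [False] * (N + 1)    # b[1..N]; b[0] is unused padding
x = 0                    # first barrier register
y = 0                    # second barrier register; 0 means "free"
counter = 0

def lock(i):
    global x, y
    while True:                       # each 'goto start' becomes a retry
        b[i] = True
        x = i
        if y != 0:                    # contention: back off and retry
            b[i] = False
            while y != 0:
                time.sleep(0)
            continue
        y = i                         # cross the barrier
        if x != i:                    # someone else crossed too
            b[i] = False
            for j in range(1, N + 1):     # wait until no one can still cross
                while b[j]:
                    time.sleep(0)
            if y != i:                # I was not the last to cross: retry
                while y != 0:
                    time.sleep(0)
                continue
        return                        # fast path: 7 shared accesses when alone

def unlock(i):
    global y
    y = 0
    b[i] = False

def worker(i, iters):
    global counter
    for _ in range(iters):
        lock(i)
        counter += 1                  # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in range(1, N + 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Since the algorithm is only deadlock free, an individual worker can in principle be overtaken repeatedly; in practice the run completes and every increment is accounted for.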

26. Flow of the algorithm • Indicate contending: b[i] := true. • Contention (y ≠ 0)? If so, wait until the CS is released and retry. • Cross the barrier: y := i, making me the last to cross. • Contention (x ≠ i)? If so, continue only after it is guaranteed that no one else can still cross the barrier; then, if I was not the last to cross (y ≠ i), wait until the CS is released and retry. • Otherwise, enter the critical section and then run the exit code.

27. Properties • Satisfies mutual exclusion and deadlock freedom. • Starvation of individual processes is possible. • Fast access: In the absence of contention, only 7 accesses to the shared memory are required. • Process time complexity is unbounded. • n + 2 shared registers are used.

28. Starvation Free Algorithms

29. Starvation Free Algorithms • In many practical systems, contention is rare, so deadlock freedom is a sufficient property. • For other systems it may be too weak a requirement, for example when a process stays a long time in the critical section.

30. The Bakery Algorithm • Based on the same policy as in a bakery, where each customer gets a number which is larger than the numbers of those waiting in line, and the holder of the lowest number gets served first. • Up to n processes may contend to enter the CS. • Each process is identified by a unique number from {1…n}.

31. The Bakery Algorithm • The algorithm makes use of a boolean array choosing[1…n] and an integer array number[1…n]. Entries choosing[i] and number[i] can be read by all processes but written only by process i. • The relation < is used on pairs of integers and is called the lexicographic order relation. It is defined by (a, b) < (c, d) if a < c, or if a = c and b < d.
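Python tuples happen to compare by exactly this lexicographic rule, which is why the (number[i], i) comparison can be written directly as a tuple comparison:

```python
# (a, b) < (c, d)  iff  a < c, or a = c and b < d
assert (2, 5) < (3, 1)        # smaller first component wins outright
assert (2, 1) < (2, 4)        # tie on the first component: compare the second
assert not (3, 0) < (2, 9)
print("lexicographic order checks passed")
```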

32. The Bakery Algorithm Initially: all entries in number and choosing are 0 and false respectively. • Process i’s program: • choosing[i] := true; • number[i] := 1 + maximum(number[1],…,number[n]); • choosing[i] := false; • for j := 1 to n do • await choosing[j] = false; • await (number[j] = 0 or (number[i], i) < (number[j], j)) • od; • critical section; • number[i] := 0;
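The bakery algorithm translates almost line for line into Python (our sketch, 0-indexed and again relying on CPython’s GIL for register atomicity; tuple comparison supplies the lexicographic order):

```python
import threading
import time

N = 3
choosing = [False] * N
number = [0] * N
counter = 0

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)          # take a ticket larger than all seen
    choosing[i] = False
    for j in range(N):
        while choosing[j]:               # wait until j's ticket choice settles
            time.sleep(0)
        while number[j] != 0 and (number[j], j) < (number[i], i):
            time.sleep(0)                # smaller (ticket, id) pairs go first

def unlock(i):
    number[i] = 0                        # leave the line

def worker(i, iters):
    global counter
    for _ in range(iters):
        lock(i)
        counter += 1                     # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 3000: first-come-first-served, no lost updates
```

Because the algorithm is starvation free, every worker is guaranteed to finish; note how number[i] keeps growing under sustained contention, matching the unboundedness noted in the properties.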

33. Properties • Satisfies mutual exclusion and first-come-first-served. • The algorithm is not fast: even in the absence of contention a process is required to access the shared memory 3(n-1) times.

34. Properties • Uses 2n shared registers. • Non-atomic registers: it is enough to assume that the registers are safe, meaning that a read which is concurrent with a write may return an arbitrary value. • The size of number[i] is unbounded.

35. The Bakery Algorithm

36. The Black-White Bakery Algorithm • A variant of the bakery algorithm developed by Gadi Taubenfeld in 2004. • By using a single additional shared bit, the amount of space required becomes bounded. • The shared bit represents a color for the customers’ tickets; the idea is that holders of a ticket whose color differs from the shared bit get priority.

37. The Black-White Bakery Algorithm

38. Tight Space Bounds

39. Tight Space Bounds We show that for n processes, n shared bits are necessary and sufficient for solving the mutual exclusion problem assuming the only atomic operations are reads and writes, and the processes are asynchronous.

40. A Lower Bound • Any deadlock free mutual exclusion algorithm for n processes must use at least n shared registers. • Proved by James E. Burns and Nancy A. Lynch in 1980.

41. Definitions • Event: an action carried out by a specific process. • x, y and z will denote runs. • When x is a prefix of y, (y–x) denotes the suffix of y obtained by removing x.

42. Definitions • x; y is an extension of x by y. • We always know where (remainder, entry, CS, exit) a process is. • If a run involves only process p, then all events in the run involve only process p.

43. Definitions • Run x looks like run y to process p. • Process p is hidden in run x. • Process p covers register r in run x.

44. Illustrations • Run x looks like run y to process p. • run x: • p reads 5 from r1 • q writes 6 to r1 • p writes 7 to r1 • q writes 8 to r1 • p reads 8 from r1 • run y: • p reads 5 from r1 • p writes 7 to r1 • q writes 6 to r1 • q reads 6 from r1 • q writes 8 to r1 • p reads 8 from r1

45. Illustrations • Process p is hidden in run x. • p reads 5 from r1 • q reads 5 from r1 • p writes 7 to r1 • q writes 8 to r1 • p reads 8 from r1 • q writes 6 to r1

46. Illustrations • Process p covers register r in run x. • p writes 7 to r1 • q writes 8 to r1 • p reads 8 from r1 • (p covers r1 at this point) • p writes 2 to r1

47. Lemma 1 • Let x be a run which looks like run y to every process in a set P. If z is an extension of x which involves only processes in P, then y ; (z–x) is also a run.

48. Lemma 2 • If a process p is in its CS in run z, then p is not hidden in z.

49. Lemma 3 • Let x be a run in which all the processes are hidden. Then, for any process p, there exists a run y which looks like x to p, in which all processes except maybe p are in their remainders. • Proof: by induction on the number of steps of processes other than p.

50. Lemma 4 • Let x be a run where all the processes are hidden. Then, for any process p, there is an extension z of x which involves only p, in which p covers some register that is not covered by any other process.