# Mutual Exclusion Using Atomic Registers

Lecturer: Netanel Dahan
Instructor: Prof. Yehuda Afek
B.Sc. Seminar on Distributed Computation, Tel-Aviv University, 11.03.07
Based on the book ‘Synchronization Algorithms and Concurrent Programming’ by Gadi Taubenfeld


Overview

• Introduction.

• Algorithms for two processes: Peterson’s and Kessels’ algorithms.

• Tournament algorithms.

• Lamport’s fast algorithm.

• Starvation free algorithms: the bakery algorithm and the black-white bakery version.

• Tight space bounds: lower and upper bounds of shared resources.

The mutual exclusion problem is the guarantee of mutually exclusive access to a shared resource, or resources when there are several competing processes.

A situation as described above, where several processes may access the same resource and the final result depends on who runs when, is called a race condition, and the problem is essentially avoiding such conditions.

The problem was first introduced by Edsger W. Dijkstra in 1965.

To solve the problem, we add entry and exit code in a way that guarantees that the mutual exclusion and deadlock freedom properties are satisfied.

remainder code: surprise, surprise, the rest of the code.

entry code: the code a process executes when it wants to enter its critical section.

critical section: the part of the code in which the shared resources reside.

exit code: the code a process executes when leaving its critical section.

• The remainder code may not influence other processes.

• Shared objects appearing in the entry or exit code may not be referred to in the remainder or critical section.

• A process cannot fail when not in its remainder code.

• Once a process starts executing the CS and exit code, it always finishes them.

• Mutual Exclusion: No two processes are in their critical section at the same time.

• Deadlock Freedom: If a process is trying to enter its critical section, then some process, not necessarily the same one, will eventually enter the critical section.

• Starvation Freedom: If a process is trying to enter its critical section, eventually it will succeed.

We start with describing two algorithms that solve the mutual exclusion problem for two processes.

They will be used for introducing the problem and possible solutions using atomic registers.

Throughout the presentation, we assume that the only atomic operations on shared registers are reads and writes.

In addition, we will use the statement await condition as an abbreviation for while !condition do skip.
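In a threaded language, await can be sketched as a busy-wait loop. A minimal Python illustration (the name await_ is ours, chosen to avoid Python's await keyword):

```python
import threading

def await_(condition):
    """await condition  ==  while !condition do skip (busy-wait)."""
    while not condition():
        pass  # spin; each retry re-reads the shared state

# demo: the main thread spins until a second thread raises the flag
flag = [False]
threading.Thread(target=lambda: flag.__setitem__(0, True)).start()
await_(lambda: flag[0])
print(flag[0])  # True
```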

Peterson’s Algorithm

• Developed by Gary L. Peterson in 1981.

• The algorithm makes use of a register called turn, which can take the values 0 and 1, the identifiers for the two possible processes, and two boolean registers b[0] and b[1].

• Both processes can read and write turn, and both can read b[0] and b[1], but only process i can write to b[i].

Process 0:

b[0] := true;

turn := 0;

await (b[1] = false or turn = 1)

critical section;

b[0] := false;

Process 1:

b[1] := true;

turn := 1;

await (b[0] = false or turn = 0)

critical section;

b[1] := false;

Peterson’s Algorithm

Initially: b[0] = b[1] = false, turn is immaterial.

Peterson’s Algorithm, annotated for process i:

Check if there is contention. If not, I can enter the CS. Otherwise, check whether I crossed the barrier first: if so, I can enter the CS, else I have to wait.

• Process i:

• b[i] := true; (indicate I am in contention for the critical section)

• turn := i; (cross the turn barrier and indicate the crossing for later observation)

• await (b[1-i] = false or turn = 1-i)

• critical section; (do my thing…)

• b[i] := false; (indicate I am not contending anymore)

• Satisfies mutual exclusion and starvation freedom.

• Contention-free time complexity is four accesses to the shared memory.

• Process time complexity is unbounded.

• Three shared registers are used.
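The pseudocode above maps almost line for line onto threads. A sketch in Python, illustrative only: the shared counter stands in for the critical section, and we rely on CPython's GIL to make the individual reads and writes atomic and sequentially consistent (on real hardware the algorithm would also need memory fences):

```python
import sys, threading

sys.setswitchinterval(1e-4)  # switch threads often so busy-waiting stays cheap

b = [False, False]  # b[i]: process i is contending
turn = 0            # immaterial initially
counter = 0         # shared state that only the critical section may touch

def process(i):
    global turn, counter
    for _ in range(2000):
        b[i] = True                                   # indicate contention
        turn = i                                      # cross the barrier
        while not (b[1 - i] == False or turn == 1 - i):
            pass                                      # await: spin until allowed in
        counter += 1                                  # critical section
        b[i] = False                                  # exit code

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: the += never raced, so mutual exclusion held
```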

Kessels’ single-writer algorithm

• A variation of Peterson’s algorithm which uses only single-writer registers.

• Uses two registers, turn[0] and turn[1], which can take the values 0 and 1, and two boolean registers, b[0] and b[1].

• Developed by J. L. W. Kessels in 1982.

Kessels’ single-writer algorithm

• Process 0:

• b[0] := true;

• local[0] := turn[1];

• turn[0] := local[0];

• await (b[1] = false or local[0] ≠ turn[1]);

• critical section;

• b[0] := false;

Initially: b[0] = b[1] = false, turn[0] and turn[1] are immaterial.

Only process i can write to b[i] and turn[i]. local[i] is local for process i.

• Process 1:

• b[1] := true;

• local[1] := 1 - turn[0];

• turn[1] := local[1];

• await (b[0] = false or local[1] = turn[0]);

• critical section;

• b[1] := false;

• Same properties as Peterson’s algorithm, except that 4 shared registers are used.

• In addition, satisfies local spinning.


• Accessing a physically remote register is costly.

• When a process waits, using an await statement, it does so by spinning (busy-waiting) on registers.

• It is much more efficient to spin on a locally-accessible register.


Tournament Algorithms

• A generalization method which enables the construction of an algorithm for n processes from any given solution for 2 processes.

• Developed by Gary L. Peterson and Michael J. Fischer in 1977.

• An important side effect is that a process may enter the critical section an arbitrary number of times before some other process in a different subtree does.
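A minimal sketch of the tree construction, assuming Peterson's algorithm as the embedded two-process solution (the names PetersonNode, LEVELS etc. are ours). Each process starts at its leaf and wins one two-process match per level, playing role 0 or 1 depending on which subtree it arrives from; the exit code releases the nodes in reverse order:

```python
import sys, threading

sys.setswitchinterval(1e-4)  # cheap busy-waiting under the GIL

class PetersonNode:
    """One two-process mutual exclusion instance per internal tree node."""
    def __init__(self):
        self.b = [False, False]
        self.turn = 0

    def lock(self, side):          # side: 0 = came from left child, 1 = right
        self.b[side] = True
        self.turn = side
        while not (not self.b[1 - side] or self.turn == 1 - side):
            pass                   # await (b[1-side] = false or turn = 1-side)

    def unlock(self, side):
        self.b[side] = False

LEVELS = 2                                            # supports n = 2**LEVELS processes
nodes = [PetersonNode() for _ in range(1 << LEVELS)]  # internal nodes, 1-indexed
counter = 0

def process(pid):
    global counter
    for _ in range(500):
        node, path = pid + (1 << LEVELS), []   # start at my leaf
        while node > 1:                        # climb toward the root
            parent, side = node // 2, node % 2
            nodes[parent].lock(side)
            path.append((parent, side))
            node = parent
        counter += 1                           # critical section
        for parent, side in reversed(path):    # release the root first
            nodes[parent].unlock(side)

threads = [threading.Thread(target=process, args=(p,)) for p in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000
```

At every node, the subtree locks below guarantee that each side is played by at most one process at a time, which is exactly what the embedded two-process algorithm requires.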

Lamport’s Fast Algorithm

• An algorithm for n processes.

• Provides fast access to the critical section in the absence of contention.

• Uses two registers, x and y, which are long enough to store a process’ identifier, and an array of boolean registers.

• Developed in 1987 by Lamport.

Lamport’s Fast algorithm

• Process i’s program:

• start: b[i] := true;

• x := i;

• if y ≠ 0 then b[i] := false;

• await y = 0;

• goto start fi;

• y := i;

• if x ≠ i then b[i] := false;

• for j := 1 to n do await !b[j] od;

• if y ≠ i then await y = 0;

• goto start fi fi;

• critical section;

• y := 0;

• b[i] := false;

The flow of the algorithm:

b[i] := true. Contention (y ≠ 0)? If yes, wait until the CS is released, then start over. If no, cross the barrier with y := i, marking myself as the last to cross the barrier. Contention (x ≠ i)? If yes, continue only after it is guaranteed that no one can cross the barrier, then ask: was I the last to cross the barrier (y = i)? If yes, enter the critical section; if no, wait until the CS is released and start over. If there was no contention at the barrier, enter the critical section directly, then run the exit code.

• Satisfies mutual exclusion and deadlock freedom.

• Starvation of individual processes is possible.

• Fast access: In the absence of contention, only 7 accesses to the shared memory are required.

• Process time complexity is unbounded.

• n + 2 shared registers are used.
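The goto-based pseudocode can be sketched with a retry loop in Python. As before this is illustrative only, relying on CPython's GIL for sequentially consistent single reads and writes; the counter stands in for the critical section:

```python
import sys, threading

sys.setswitchinterval(1e-4)  # cheap busy-waiting under the GIL

N = 4
b = [False] * (N + 1)   # 1-indexed, as in the pseudocode
x = 0
y = 0                   # y = 0 means nobody holds the fast-path barrier
counter = 0

def process(i):                       # i in 1..N
    global x, y, counter
    for _ in range(200):
        while True:                   # 'goto start' becomes a retry loop
            b[i] = True
            x = i
            if y != 0:                # contention: back off and retry
                b[i] = False
                while y != 0: pass    # await y = 0
                continue
            y = i                     # cross the barrier
            if x != i:                # someone crossed after me
                b[i] = False
                for j in range(1, N + 1):
                    while b[j]: pass  # await !b[j]
                if y != i:            # I was not the last to cross
                    while y != 0: pass
                    continue
            break
        counter += 1                  # critical section
        y = 0
        b[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in range(1, N + 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 800
```

On the contention-free path the loop body runs once and never spins, which is exactly the seven-access fast path counted above.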

• In many practical systems, contention is rare, so deadlock freedom is a sufficient property.

• For other systems it might be too weak a requirement, for example when a process stays a long time in the critical section.

The Bakery Algorithm

• Based on the same policy as in a bakery, where each customer gets a number larger than the numbers of those already waiting in line, and the holder of the lowest number gets served.

• Up to n processes are assumed to be contending to enter the CS.

• Each process is identified by a unique number from {1…n}.

• The algorithm makes use of a boolean array choosing[1…n] and an integer array number[1…n]. Entries choosing[i] and number[i] can be read by all processes but written only by process i.

• The relation < used on pairs of integers is called the lexicographic order relation. It is defined by (a, b) < (c, d) if a < c, or if a = c and b < d.
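Python tuples already compare in exactly this lexicographic order, which makes the ticket comparison directly expressible:

```python
# (number, id) tickets: smaller ticket number wins; equal numbers fall back to the id
assert (3, 2) < (4, 1)        # 3 < 4, so ids are irrelevant
assert (3, 1) < (3, 2)        # equal numbers: process 1 beats process 2
assert not ((4, 1) < (3, 2))  # 4 > 3
print("lexicographic order, as defined above")
```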

Initially: all entries in number and choosing are 0 and false respectively.

• Process i’s program:

• choosing[ i ] := true;

• number[ i ] := 1 + maximum(number[1],…,number[n]);

• choosing[ i ] := false;

• for j := 1 to n do

• await choosing[ j ] = false;

• await (number[ j ] = 0 or (number[ i ], i) ≤ (number[ j ], j))

• od;

• critical section;

• number[ i ] := 0;

• Satisfies mutual exclusion and first-come-first-served.

• The algorithm is not fast: even in the absence of contention a process is required to access the shared memory 3(n-1) times.

• Uses 2n shared registers.

• Non-atomic registers: it is enough to assume that the registers are safe, meaning that writes which are concurrent with reads will return an arbitrary value.

• The size of number[i] is unbounded.
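The bakery pseudocode above can be sketched with threads; Python tuples give us the lexicographic comparison for free. Illustrative only (GIL-based atomicity, counter as the critical section; note the ≤ so that a process never waits on its own ticket):

```python
import sys, threading

sys.setswitchinterval(1e-4)  # cheap busy-waiting under the GIL

N = 4
choosing = [False] * (N + 1)   # 1-indexed, as in the pseudocode
number = [0] * (N + 1)
counter = 0

def process(i):                            # i in 1..N
    global counter
    for _ in range(200):
        choosing[i] = True                 # doorway: take a ticket
        number[i] = 1 + max(number)
        choosing[i] = False
        for j in range(1, N + 1):
            while choosing[j]: pass        # await choosing[j] = false
            # wait until j is out, or my ticket (number, id) is not larger than j's
            while not (number[j] == 0 or (number[i], i) <= (number[j], j)):
                pass
        counter += 1                       # critical section
        number[i] = 0

threads = [threading.Thread(target=process, args=(i,)) for i in range(1, N + 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 800
```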

The Black-White Bakery Algorithm

• A variant of the bakery algorithm, developed by Gadi Taubenfeld in 2004.

• By using a single additional shared bit, the amount of space required is bounded.

• The shared bit represents a color for the customers’ tickets; the idea is that holders of tickets whose color differs from the current value of the shared bit get priority.

Tight Space Bounds

We show that for n processes, n shared bits are necessary and sufficient for solving the mutual exclusion problem, assuming the only atomic operations are reads and writes and the processes are asynchronous.

• Lower bound: Any deadlock free mutual exclusion algorithm for n processes must use at least n shared registers.

• Proved by James E. Burns and Nancy A. Lynch in 1980.

• Event: an action carried by a specific process.

• x, y and z will denote runs.

• When x is a prefix of y, (y–x) denotes the suffix of y obtained by removing x.

• x; y is an extension of x by y.

• We always know in which part of the code (remainder, entry, CS, exit) a process is.

• If a run involves only process p, then all events in the run involve only process p.

• Run x looks like run y to process p: p performs the same sequence of events and reads the same values in both runs, so it cannot distinguish them.

• Process p is hidden in run x: every value p writes is overwritten before any other process reads it.

• Process p covers register r in run x: at the end of x, the next step of p is a write to r.

• Example: run x looks like run y to process p.

• run x

• p reads 5 from r1

• q writes 6 to r1

• p writes 7 to r1

• q writes 8 to r1

• p reads 8 from r1

• run y

• p reads 5 from r1

• p writes 7 to r1

• q writes 6 to r1

• q reads 6 from r1

• q writes 8 to r1

• p reads 8 from r1

• q writes 6 to r1

• Example: process p is hidden in run x.

• p reads 5 from r1

• q reads 5 from r1

• p writes 7 to r1

• q writes 8 to r1

• p reads 8 from r1

• q writes 6 to r1

• Example: process p covers register r in run x.

p covers r1 at this point

• p writes 7 to r1

• q writes 8 to r1

• p reads 8 from r1

• p writes 2 to r1

• Lemma 1: Let x be a run which looks like run y to every process in a set P. If z is an extension of x which involves only processes in P, then y ; (z–x) is a run.


• Lemma 2: If a process p is in its CS in run z, then p is not hidden in z.


• Lemma 3: Let x be a run in which all the processes are hidden. Then, for any process p, there exists a run y which looks like x to p, in which all processes except maybe p are in their remainders.

• Proof: by induction on the number of steps of processes other than p.

• Lemma 4: Let x be a run where all the processes are hidden. Then, for any process p, there is an extension z of x which involves only p, in which p covers some register that is not covered by any other process.

• Proof: From Lemma 3, there exists a run y which looks like x to p, in which all processes except maybe p are in their remainders. By the deadlock freedom property, starting from y process p is able to enter the CS on its own.

• By Lemma 1, since y looks like x to p, p should be able to do the same starting from x.

• Suppose p only writes registers covered by other processes before entering the CS.

• Then, when all covered registers are written one after the other we get a run in which p is hidden and is in its CS.

• By lemma 2, this is not possible.

• Lemma 5: Let x be a run in which all the processes are in their remainders. Then, for every set of processes P there is an extension z of x which involves only processes in P, in which the processes in P are hidden and cover |P| distinct registers.

• Proof: by induction on the size of P.

• Upper bound: There is a deadlock free mutual exclusion algorithm for n processes which uses n shared bits.

• As a proof, we present the One-Bit Algorithm, developed independently by J. E. Burns (1981) and L. Lamport (1986).

• Up to n processes may be contending to enter the CS, each with a unique identifier from {1…n}.

• Uses a boolean array b, where all processes can read all entries, but only process i can write b[i].

Initially: all entries in b are false.

• Process i’s program:

• repeat

• b[i] := true; j := 1;

• while (b[i] = true) and (j < i) do

• if b[j] = true then b[i] := false; await b[j] = false fi;

• j := j + 1;

• od

• until b[i] = true;

• for j := i + 1 to n do await b[j] = false od;

• critical section;

• b[i] := false;

• Satisfies mutual exclusion and deadlock freedom.

• Starvation of individual processes is possible.

• Not fast: even in the absence of contention a process needs to access all of the n shared bits.

• Not symmetrical: a process with a smaller identifier has higher priority.

• Uses only n shared bits, hence it is space optimal.

• Non-atomic registers: it is enough to assume that the registers are safe, meaning that writes which are concurrent with reads will return an arbitrary value.
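The repeat/until structure of the One-Bit Algorithm translates into a retry loop. As with the earlier sketches, this Python version is illustrative (GIL-based atomicity, counter as the critical section):

```python
import sys, threading

sys.setswitchinterval(1e-4)  # cheap busy-waiting under the GIL

N = 4
b = [False] * (N + 1)   # the n shared bits, 1-indexed
counter = 0

def process(i):                          # i in 1..N; smaller i = higher priority
    global counter
    for _ in range(200):
        while True:                      # repeat ... until b[i] = true
            b[i] = True
            j = 1
            while b[i] and j < i:
                if b[j]:                 # defer to a contender with a smaller id
                    b[i] = False
                    while b[j]: pass     # await b[j] = false
                j += 1
            if b[i]:
                break
        for j in range(i + 1, N + 1):    # wait out contenders with larger ids
            while b[j]: pass             # await b[j] = false
        counter += 1                     # critical section
        b[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in range(1, N + 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 800
```

The asymmetry mentioned above is visible in the code: a process scans only the smaller ids before claiming its bit, and then merely waits for the larger ids to clear.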