
Distributed Computing

Presentation Transcript


  1. Distributed Computing Adam Morrison Some slides based on “Art of Multiprocessor Programming”

  2. Outline Administration Background Mutual exclusion

  3. Administration • Mandatory attendance in 11 of the 13 lectures • https://www.cs.tau.ac.il/~afek/dc19.html • Grade: • 5% class participation • 40% homework (~5 in the semester) • 55% project

  4. Project • Analyze some topic or papers • Submit 2-5 pages summarizing your findings • Give a 15-minute talk

  5. Outline Administration Background Mutual exclusion

  6. Distributed computing Computing: [figure: a single box of code running on one machine]

  7. Distributed computing Computing: [figure: a single box of code] Distributed computing: [figure: several boxes of code]

  8. Models Message-passing: Communicate by messages over the network [figure: boxes of code connected by a network]

  9. Models Message-passing: Communicate by messages over the network Shared-memory: Communicate by reading/writing shared memory [figure: boxes of code connected by a network; boxes of code attached to a shared memory]

  10. Models Message-passing & shared-memory are closely connected Shared-memory can simulate message-passing Proof: implement message queues in software (a sketch follows below) Message-passing can simulate shared-memory* *under assumptions on the # of failures Proof: “Sharing Memory Robustly in Message-Passing Systems” [Attiya, Bar-Noy, Dolev 1990], winner of the Dijkstra Prize
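To make the first direction concrete, here is a minimal sketch of a message channel built on top of shared memory: a single-producer/single-consumer mailbox. The C11-atomics phrasing and the names (mbox_t, CAP, mbox_send, mbox_recv) are illustrative choices, not part of the lecture.

      // Sketch: simulating a message channel on top of shared memory.
      // Single-producer/single-consumer ring buffer; one sender process calls
      // mbox_send, one receiver process calls mbox_recv.
      // (Zero-initialize an mbox_t before use.)
      #include <stdatomic.h>
      #include <stdbool.h>

      #define CAP 16                    // channel capacity (illustrative)

      typedef struct {
          int buf[CAP];
          atomic_size_t head;           // next slot to read  (advanced by receiver)
          atomic_size_t tail;           // next slot to write (advanced by sender)
      } mbox_t;

      // "Send a message": returns false if the channel is currently full.
      bool mbox_send(mbox_t *m, int msg) {
          size_t t = atomic_load_explicit(&m->tail, memory_order_relaxed);
          size_t h = atomic_load_explicit(&m->head, memory_order_acquire);
          if (t - h == CAP) return false;
          m->buf[t % CAP] = msg;
          atomic_store_explicit(&m->tail, t + 1, memory_order_release);
          return true;
      }

      // "Receive a message": returns false if no message is pending.
      bool mbox_recv(mbox_t *m, int *msg) {
          size_t h = atomic_load_explicit(&m->head, memory_order_relaxed);
          size_t t = atomic_load_explicit(&m->tail, memory_order_acquire);
          if (h == t) return false;
          *msg = m->buf[h % CAP];
          atomic_store_explicit(&m->head, h + 1, memory_order_release);
          return true;
      }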

  11. This course Message-passing: Communicate by messages over the network Shared-memory: Communicate by reading/writing shared memory [figure: the two models from the previous slide]

  12. This course • Foundations of distributed computing • Mainly about communication/synchronization between processors • Not about parallelism • Problems of agreement will come up a lot • New this year: blockchains (theory of)

  13. Outline Administration Background Mutual exclusion

  14. Shared-memory model [figure: several processes (code) issuing reads and writes to a shared memory]

  15. Shared-memory model • Execution consists of a sequence of steps • Each step is a read/write of some memory location • (Don’t care about local computation!) [figure: steps of P1 and P2 interleaved along a time axis]

  16. Shared-memory model • Execution consists of a sequence of steps • Each step is a read/write of some memory location • (Don’t care about local computation!) • Asynchronous system • Sudden unpredictable delays • Some scheduler picks the next step in an arbitrary way [figure: steps of P1 and P2 interleaved along a time axis]
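As a small illustration of this asynchrony (my own example, not from the slides): two POSIX threads each repeatedly perform a read step and a write step on one shared location, and because the scheduler may interleave those steps arbitrarily, updates can be lost.

      // Sketch: arbitrary interleaving of read/write steps.
      // (Illustrative only; in standard C this unsynchronized access is also
      // a data race.)
      #include <pthread.h>
      #include <stdio.h>

      static int counter = 0;              // shared memory location

      static void *worker(void *arg) {
          (void)arg;
          for (int i = 0; i < 1000000; i++) {
              int tmp = counter;           // read step
              counter = tmp + 1;           // write step: may overwrite the
                                           // other process's update
          }
          return NULL;
      }

      int main(void) {
          pthread_t p0, p1;
          pthread_create(&p0, NULL, worker, NULL);
          pthread_create(&p1, NULL, worker, NULL);
          pthread_join(p0, NULL);
          pthread_join(p1, NULL);
          printf("counter = %d (may be well below 2000000)\n", counter);
          return 0;
      }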

  17. Mutual exclusion • A lock is an object (variable) with basic methods: • Lock() (acquire) • Unlock() (release) • Code between Lock(&L) and Unlock(&L) is a critical section of L • The lock algorithm guarantees mutual exclusion: of all callers to Lock(), only one can finish and enter the critical section until it exits the CS by calling Unlock() Lock L; Lock(&L); … Unlock(&L);
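A concrete stand-in for this interface (assuming a POSIX mutex in place of the lock algorithms developed on the following slides) protects the shared counter from the previous sketch:

      // Sketch: the Lock()/Unlock() interface, instantiated with a POSIX
      // mutex as a stand-in for the lock algorithms developed below.
      #include <pthread.h>

      static pthread_mutex_t L = PTHREAD_MUTEX_INITIALIZER;
      static int counter = 0;

      void increment(void) {
          pthread_mutex_lock(&L);      // Lock(&L): entry section
          counter = counter + 1;       // critical section of L
          pthread_mutex_unlock(&L);    // Unlock(&L): exit section
      }

With the worker loop from the previous sketch calling increment(), the final count is always the full 2000000.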

  18. Mutual exclusion Process execution consists of (repeat forever): remainder section, entry section, critical section, exit section The entry and exit sections are defined by the mutual exclusion algorithm Progress assumption: a process can only halt while in the remainder section

  19. Mutual exclusion formalism Interval (a0, a1) is the subsequence of events starting with a0 & ending with a1 [figure: interval from a0 to a1 on a time axis]

  20. Mutual exclusion formalism Overlapping intervals [figure: two overlapping intervals on a time axis]

  21. Mutual exclusion formalism • Disjoint intervals: we write A -> B (A precedes B) if the end event of A precedes the start event of B • Precedence is a partial order: A -> B and B -> A might both be false [figure: disjoint intervals A and B on a time axis]

  22. Mutual exclusion property Let CSik be process i's k-th critical section execution and CSjm be process j's m-th critical section execution Then either CSik -> CSjm or CSjm -> CSik [figure: the two critical-section intervals on a time axis]

  23. Plan 2-process solution N-process solution Fairness Inherent costs

  24. First attempt: flag principle Entry (“lock()”)
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0

  25. First attempt: flag principle Exit (“unlock()”)
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0

  26. First attempt: flag principle
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0
      [figure: time line in which “my flag up” happens before observing “no other flag up”]
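A direct translation of this first attempt into compilable C (my translation of the slide's pseudocode; sequentially consistent C11 atomics are assumed so that the read/write ordering used in the proof below actually holds on real hardware):

      // Sketch: the flag-only lock in C11 atomics.
      #include <stdatomic.h>

      static atomic_int flag[2];                 // flag[i] == 1: process i wants the CS

      void flag_lock(int i) {                    // i is 0 or 1
          atomic_store(&flag[i], 1);             // raise my flag
          while (atomic_load(&flag[1 - i])) {}   // wait until the other flag is down
      }

      void flag_unlock(int i) {
          atomic_store(&flag[i], 0);             // lower my flag
      }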

  27. Mutual exclusion proof • Assume CSik overlaps CSjm • Consider each process's last (kth and mth) read and write before entering • Derive a contradiction

  28. Proof
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0
      From code:
      write0(flag0=true) -> read0(flag1==false) -> CS0
      write1(flag1=true) -> read1(flag0==false) -> CS1

  29. Proof
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0
      From code:
      write0(flag0=true) -> read0(flag1==false) -> CS0
      write1(flag1=true) -> read1(flag0==false) -> CS1
      From assumption:
      read0(flag1==false) -> write1(flag1=true)
      read1(flag0==false) -> write0(flag0=true)

  34. Proof
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0
      From code:
      write0(flag0=true) -> read0(flag1==false) -> CS0
      write1(flag1=true) -> read1(flag0==false) -> CS1
      From assumption:
      read0(flag1==false) -> write1(flag1=true)
      read1(flag0==false) -> write0(flag0=true)
      A cycle! Impossible in a total order (of events)

  35. Problem: progress
      P0: flag0 := 1
          while (flag1) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          while (flag0) {}
          -- CS --
          flag1 := 0
      If both processes raise their flags before either reads the other’s flag, both spin in the while loop forever

  36. Deadlock and progress • Deadlock is a state in which no thread can complete its operation, because they’re all waiting for some condition that will never happen • The previous slide is an example • Mutual exclusion progress guarantees: • Deadlock-freedom: if a thread is trying to enter the critical section, then some thread must eventually enter the critical section • Starvation-freedom: if a thread is trying to enter the critical section, then this thread must eventually enter the critical section

  37. 2nd attempt
      P0: victim := 0
          flag0 := 1
          while (flag1 && victim==0) {}
          -- CS --
          flag0 := 0
      P1: victim := 1
          flag1 := 1
          while (flag0 && victim==1) {}
          -- CS --
          flag1 := 0

  38. Peterson’s algorithm
      P0: flag0 := 1
          victim := 0
          while (flag1 && victim==0) {}
          -- CS --
          flag0 := 0
      P1: flag1 := 1
          victim := 1
          while (flag0 && victim==1) {}
          -- CS --
          flag1 := 0
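The same kind of translation for Peterson's algorithm (again my sketch; sequentially consistent C11 atomics are assumed, since with weaker memory orderings the argument on the following slides does not carry over to real hardware):

      // Sketch: Peterson's two-process lock in C11 atomics.
      #include <stdatomic.h>

      static atomic_int flag[2];        // flag[i] == 1: process i wants the CS
      static atomic_int victim;         // the process that yields if both want in

      void peterson_lock(int i) {       // i is 0 or 1
          atomic_store(&flag[i], 1);    // announce interest
          atomic_store(&victim, i);     // let the other process go first
          while (atomic_load(&flag[1 - i]) && atomic_load(&victim) == i) {}
      }

      void peterson_unlock(int i) {
          atomic_store(&flag[i], 0);
      }

Replacing the mutex in the earlier increment() sketch with peterson_lock(i)/peterson_unlock(i) for threads 0 and 1 should again always yield the full count.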

  39. Peterson’s mutual exclusion proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • Assume both in CS

  40. Peterson’s mutual exclusion proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • Assume both in CS
      • Suppose P0 is last to write victim (writes 0)
      Derived order so far: P0 writes victim:=0

  41. Peterson’s mutual exclusion proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • Assume both in CS
      • Suppose P0 is last to write victim (writes 0)
      • So it reads flag1==0
      Derived order so far: P0 writes victim:=0 -> P0 reads flag1==0

  42. Peterson’s mutual exclusion proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • Assume both in CS
      • Suppose P0 is last to write victim (writes 0)
      • So it reads flag1==0
      • So P1 writes flag1 later
      Derived order so far: P0 writes victim:=0 -> P0 reads flag1==0 -> P1 writes flag1:=1

  43. Peterson’s mutual exclusion proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • Assume both in CS
      • Suppose P0 is last to write victim (writes 0)
      • So it reads flag1==0
      • So P1 writes flag1 later
      • But then it writes victim => contradiction
      Derived order so far: P0 writes victim:=0 -> P0 reads flag1==0 -> P1 writes flag1:=1 -> P1 writes victim:=1

  44. Peterson’s deadlock-freedom proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • A process is blocked:
        • Only at the while loop
        • Only if the other’s flag==1
        • Only if it is the victim
      • In a solo execution: the other’s flag is false
      • Otherwise: somebody isn’t the victim

  45. Peterson’s starvation-freedom proof
      flag[i] := 1
      victim := i
      while (flag[1-i] && victim==i) {}
      -- CS --
      flag[i] := 0
      • Process i is blocked only if 1-i repeatedly enters so that flag[1-i] && victim==i
      • But when 1-i re-enters:
        • It sets victim to 1-i
        • So i gets in

  46. Plan 2-process solution N-process solution Fairness Inherent costs

  47. Filter algorithm • Generalization of Peterson’s • N-1 levels (waiting rooms) that a process has to go through to enter the CS • At each level: • At least one enters • At least one is blocked if many try • I.e., at most N-i processes pass into level i • Only one process makes it to the CS (level N-1) [figure: levels from remainder to CS; credit: Art of Multiprocessor Programming]

  48. Filter algorithm
      int level[N]   // level of process i
      int victim[N]  // victim at level L
      for (L = 1; L < N; L++) {
        level[i] = L
        victim[L] = i
        while ((∃ k != i: level[k] >= L) && victim[L] == i) {}
      }
      -- CS --
      level[i] = 0

  49. Filter algorithm
      int level[N]   // level of process i
      int victim[N]  // victim at level L
      for (L = 1; L < N; L++) {          // one level at a time
        level[i] = L
        victim[L] = i
        while ((∃ k != i: level[k] >= L) && victim[L] == i) {}
      }
      -- CS --
      level[i] = 0

  50. Filter algorithm
      int level[N]   // level of process i
      int victim[N]  // victim at level L
      for (L = 1; L < N; L++) {
        level[i] = L                     // announce intention to enter level L
        victim[L] = i
        while ((∃ k != i: level[k] >= L) && victim[L] == i) {}
      }
      -- CS --
      level[i] = 0
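A compilable version of the Filter lock (my translation of the pseudocode above; the existential test "∃ k != i: level[k] >= L" becomes an explicit scan, and sequentially consistent C11 atomics are assumed):

      // Sketch: the Filter lock for N processes in C11 atomics.
      #include <stdatomic.h>

      #define N 8                          // number of processes (illustrative)

      static atomic_int level[N];          // level of process i (0 = remainder)
      static atomic_int victim[N];         // victim at each level

      void filter_lock(int i) {
          for (int L = 1; L < N; L++) {
              atomic_store(&level[i], L);      // announce intention to enter level L
              atomic_store(&victim[L], i);     // volunteer to be this level's victim
              for (;;) {                       // spin while someone is at level >= L
                                               // and I am still the victim
                  int conflict = 0;
                  for (int k = 0; k < N; k++)
                      if (k != i && atomic_load(&level[k]) >= L) { conflict = 1; break; }
                  if (!conflict || atomic_load(&victim[L]) != i) break;
              }
          }
      }

      void filter_unlock(int i) {
          atomic_store(&level[i], 0);          // back to the remainder section
      }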
