
Review of Concurrency

Presentation Transcript


  1. Review of Concurrency CSE 121, Spring 2003 Keith Marzullo

  2. Concurrency A process is a program executing on a virtual computer. Issues such as processor speed and multiplexing of shared resources are abstracted away. Processes may either explicitly share or implicitly multiplex memory (thread or lightweight process).

  3. Creating processes A possible syntax for creating processes:
    ID fork(int (*proc)(int), int p);
    int join(ID i);

    int foo(int p);
    int r;
    ID i = fork(foo, 3);
    ...
    r = join(i);
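This fork/join interface is not the POSIX API; the sketch below shows one way it might be realized on top of POSIX threads. The names proc_fork and proc_join are invented here (to avoid clashing with the POSIX fork), and the fragment is illustrative rather than part of the original slides.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef pthread_t ID;

    struct fork_args { int (*proc)(int); int p; };

    /* Runs the forked procedure and smuggles its int result through void *. */
    static void *trampoline(void *a) {
        struct fork_args *args = a;
        int r = args->proc(args->p);
        free(args);
        return (void *)(long)r;
    }

    ID proc_fork(int (*proc)(int), int p) {
        struct fork_args *a = malloc(sizeof *a);
        a->proc = proc;
        a->p = p;
        pthread_t t;
        pthread_create(&t, NULL, trampoline, a);
        return t;
    }

    int proc_join(ID i) {
        void *r;
        pthread_join(i, &r);
        return (int)(long)r;
    }

    int foo(int p) { return p * 2; }

    int main(void) {
        ID i = proc_fork(foo, 3);
        printf("%d\n", proc_join(i));   /* prints 6 */
        return 0;
    }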

  4. Creating processes, II Another syntax for creating processes:
    cobegin
      code executed by process 1;
    ||
      code executed by process 2;
    ||
      ...
    coend

  5. Properties Concurrent programs are specified using properties; a property is a predicate evaluated over a run of the concurrent program. • Thus, it has the value true or false in each run of the program. • We say that the property holds if it is true in each run. • A property: the value of x is always at least as large as the value of y, and x has the value 0 at least once. • Not a property: the average number of processes waiting on a lock is less than 1.

  6. Safety and liveness properties Any property is either a safety property, a liveness property, or a conjunction of a safety and a liveness property.

  7. Safety properties A safety property is of the form nothing bad happens (that is, all states are safe). Examples: The number of processes in a critical section is always less than 2. Let p be the sequence of produced values and c be the sequence of consumed values. c is always a prefix of p.

  8. Liveness properties A liveness property is of the form something good happens (that is, an interesting state is eventually achieved). Examples: A process that wishes to enter the critical section eventually does so. p grows without bound. For every value x in p, x is eventually in c.

  9. Safety and Liveness Showing a safety property P holds: • find a safety property P’: P’ => P; • show that P’ initially holds; • show that each step of the program maintains P’. Showing a liveness property holds is usually done by induction.

  10. Basic properties of environment • Finite progress axiom: each process takes a step infinitely often. • Do all schedulers implement this? • If not, do such schedulers pose problems? • Atomic shared variables: consider
    {x = A}
    x = B;
    {x = B}
    Any concurrent read of x will return either A or B.

  11. Atomic Variables
    {x = 0}
    cobegin
      x = x + 1;
    ||
      x = x - 1;
    coend
    {x ∈ {-1, 0, 1}}

  12. Producer/Consumer problem Let p be the sequence of produced values and c be the sequence of consumed values. • c is always a prefix of p. • For every value x in p, x is eventually in c. Bounded buffer variant: • Always |p| - |c| ≤ max

  13. Solving producer-consumer
    int buf, produced = 0;   /* produced ∈ {0, 1} */

    Producer
    while (1) {
      while (produced) ;
      buf = v;               /* logically appends v to p */
      produced = 1;
    }

    Consumer
    while (1) {
      while (!produced) ;
      c = append(c, buf);
      produced = 0;
    }
    Note that this satisfies max = 1. How do you generalize this for values of max greater than 1?
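One way to generalize the busy-waiting solution to values of max greater than 1 is to replace the single produced flag with a circular buffer and two counters. The sketch below assumes a single producer and a single consumer and uses C11 sequentially consistent atomics to stand in for the atomic-shared-variable assumption of slide 10; the names (produce, consume, MAX) are chosen for illustration and are not from the slides.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define MAX 8                          /* bound on |p| - |c| */

    static int buf[MAX];
    static atomic_long in, out;            /* items produced / consumed so far */

    /* Producer: spin while the buffer is full, then logically append v to p. */
    static void produce(int v) {
        while (in - out == MAX) ;
        buf[in % MAX] = v;
        in = in + 1;                       /* only the producer writes in */
    }

    /* Consumer: spin while the buffer is empty, then logically append to c. */
    static int consume(void) {
        while (in - out == 0) ;
        int v = buf[out % MAX];
        out = out + 1;                     /* only the consumer writes out */
        return v;
    }

    static void *producer(void *arg) {
        for (int v = 0; v < 1000; v++) produce(v);
        return NULL;
    }

    static void *consumer(void *arg) {
        long sum = 0;
        for (int k = 0; k < 1000; k++) sum += consume();
        printf("sum = %ld\n", sum);        /* expect 499500 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Because each counter has a single writer, the plain load/store accesses suffice; with several producers or consumers the increments would need to be made atomic read-modify-write operations or protected by a lock.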

  14. Mutual exclusion • The number of processes in the critical section is never more than 1. • If a process wishes to enter the critical section then it eventually does so.

  15. Solving mutual exclusion
    int in1, in2, turn = 1;

    // process 1
    while (1) {
      in1 = 1; turn = 2;
      while (in2 && turn == 2) ;
      Critical section for process 1;
      in1 = 0;
    }

    // process 2
    while (1) {
      in2 = 1; turn = 1;
      while (in1 && turn == 1) ;
      Critical section for process 2;
      in2 = 0;
    }
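As a rough illustration, the algorithm above can be run as a real program if the shared variables are made sequentially consistent atomics; plain ints would not do on modern hardware because of compiler and processor reordering. The counter and the thread scaffolding below are added only to exercise the critical section; this is a sketch, not part of the original slides.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int in1, in2, turn = 1;
    static long counter;                    /* protected by the critical section */

    static void *process1(void *arg) {
        for (int k = 0; k < 1000000; k++) {
            in1 = 1; turn = 2;
            while (in2 && turn == 2) ;      /* busy wait */
            counter++;                      /* critical section for process 1 */
            in1 = 0;
        }
        return NULL;
    }

    static void *process2(void *arg) {
        for (int k = 0; k < 1000000; k++) {
            in2 = 1; turn = 1;
            while (in1 && turn == 1) ;
            counter++;                      /* critical section for process 2 */
            in2 = 0;
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, process1, NULL);
        pthread_create(&t2, NULL, process2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%ld\n", counter);           /* expect 2000000 */
        return 0;
    }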

  16. Proof of Peterson’s algorithm: I
    int in1, in2, turn = 1, at1 = 0, at2 = 0;

    // process 1
    while (1) {
      ⟨in1 = 1; at1 = 1;⟩
      ⟨turn = 2; at1 = 0;⟩
      while (in2 && turn == 2) ;
      Critical section for process 1;
      in1 = 0;
    }

    // process 2
    while (1) {
      ⟨in2 = 1; at2 = 1;⟩
      ⟨turn = 1; at2 = 0;⟩
      while (in1 && turn == 1) ;
      Critical section for process 2;
      in2 = 0;
    }

  17. Proof of Peterson’s algorithm: II
    while (1) {
      {¬in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1}
      ⟨in1 = 1; at1 = 1;⟩
      {in1 ∧ (turn = 1 ∨ turn = 2) ∧ at1}
      ⟨turn = 2; at1 = 0;⟩
      {in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1}
      while (in2 && turn == 2) ;
      {in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1 ∧ (¬in2 ∨ turn = 1 ∨ at2)}
      Critical section for process 1;
      {in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1 ∧ (¬in2 ∨ turn = 1 ∨ at2)}
      in1 = 0;
      {¬in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1}
    }

  18. Proof of Peterson’s algorithm: III
    while (1) {
      {¬in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2}
      ⟨in2 = 1; at2 = 1;⟩
      {in2 ∧ (turn = 1 ∨ turn = 2) ∧ at2}
      ⟨turn = 1; at2 = 0;⟩
      {in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2}
      while (in1 && turn == 1) ;
      {in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2 ∧ (¬in1 ∨ turn = 2 ∨ at1)}
      Critical section for process 2;
      {in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2 ∧ (¬in1 ∨ turn = 2 ∨ at1)}
      in2 = 0;
      {¬in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2}
    }

  19. Proof of Peterson’s algorithm: IV At CS1 and at CS2 implies false:
    in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1 ∧ (¬in2 ∨ turn = 1 ∨ at2)
    ∧ in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2 ∧ (¬in1 ∨ turn = 2 ∨ at1)
    ⇒ (turn = 1) ∧ (turn = 2)
    Process 1 in its noncritical section, process 2 trying to enter the critical section and blocked, implies false:
    ¬in1 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at1
    ∧ in2 ∧ (turn = 1 ∨ turn = 2) ∧ ¬at2
    ∧ in1 ∧ turn = 1
    ⇒ ¬in1 ∧ in1

  20. Proof of Peterson’s algorithm: V Process 1 trying to enter the critical section, process 2 trying to enter the critical section, and both blocked, implies false:
    in2 ∧ turn = 2 ∧ in1 ∧ turn = 1 ⇒ (turn = 1) ∧ (turn = 2)

  21. The Bakery Algorithm, I
    int turn[n] = 0, 0, ..., 0;
    int number = 1, next = 1;

    // code for process i
    int j;
    while (1) {
      ⟨turn[i] = number; number = number + 1;⟩
      for (j = 0; j < n; j++)
        if (j != i)
          ⟨await (turn[i] == next);⟩
      critical section
      ⟨next = next + 1;⟩
      noncritical code
    }

  22. The Bakery Algorithm, II
    int turn[n] = 0, 0, ..., 0;

    // code for process i
    int j;
    while (1) {
      ⟨turn[i] = max(turn[0], ..., turn[n-1]) + 1;⟩
      for (j = 0; j < n; j++)
        if (j != i)
          ⟨await (turn[j] == 0 || turn[i] < turn[j]);⟩
      critical section
      turn[i] = 0;
      noncritical code
    }

  23. The Bakery Algorithm, III
    int turn0 = 0, turn1 = 0;
    cobegin
      while (1) {
        turn0 = turn1 + 1;
        while (turn1 != 0 && turn0 > turn1) ;
        critical section
        turn0 = 0;
        noncritical code
      }
    ||
      while (1) {
        turn1 = turn0 + 1;
        while (turn0 != 0 && turn1 > turn0) ;
        critical section
        turn1 = 0;
        noncritical code
      }
    coend
    ... but this isn’t safe.

  24. The Bakery Algorithm, IV
    int turn0 = 0, turn1 = 0;
    cobegin
      while (1) {
        turn0 = turn1 + 1;
        while (turn1 != 0 && turn0 > turn1) ;
        critical section
        turn0 = 0;
        noncritical code
      }
    ||
      while (1) {
        turn1 = turn0 + 1;
        while (turn0 != 0 && turn1 >= turn0) ;
        critical section
        turn1 = 0;
        noncritical code
      }
    coend
    ... but this still isn’t safe - suppose each finds the other’s turn value 0.

  25. The Bakery Algorithm, V
    int turn0 = 0, turn1 = 0;
    cobegin
      while (1) {
        turn0 = turn1 + 1;
        while (turn1 != 0 && turn0 > turn1) ;
        critical section
        turn0 = 0;
        noncritical code
      }
    ||
      while (1) {
        turn1 = turn0 + 1;
        while (turn0 != 0 && turn1 >= turn0) ;
        critical section
        turn1 = 0;
        noncritical code
      }
    coend
    ... but this still isn’t safe! Have both see turn values 0, and then let process 1 enter first.

  26. The Bakery Algorithm, VI
    int turn0 = 0, turn1 = 0;
    cobegin
      while (1) {
        turn0 = 1; turn0 = turn1 + 1;
        while (turn1 != 0 && turn0 > turn1) ;
        critical section
        turn0 = 0;
        noncritical code
      }
    ||
      while (1) {
        turn1 = 1; turn1 = turn0 + 1;
        while (turn0 != 0 && turn1 >= turn0) ;
        critical section
        turn1 = 0;
        noncritical code
      }
    coend
    ... can symmetrize by defining [a, b] > [c, d] to be (a > c) ∨ ((a = c) ∧ (b > d))

  27. The Bakery Algorithm, VII ... can symmetrize by defining [a, b] > [c, d] to be (a > c) ∨ ((a = c) ∧ (b > d))
    int turn0 = 0, turn1 = 0;
    cobegin
      while (1) {
        turn0 = 1; turn0 = turn1 + 1;
        while (turn1 != 0 && [turn0, 0] > [turn1, 1]) ;
        critical section
        turn0 = 0;
        noncritical code
      }
    ||
      ...
    coend

  28. The Bakery Algorithm, VIII
    int turn[n] = 0, 0, ..., 0;

    // code for process i
    int j;
    while (1) {
      turn[i] = 1; turn[i] = max(turn[0], ..., turn[n-1]) + 1;
      for (j = 0; j < n; j++)
        if (j != i)
          while (turn[j] != 0 && [turn[i], i] > [turn[j], j]) ;
      critical section
      turn[i] = 0;
      noncritical code
    }

  29. Revisiting shared variables Both solutions use busy waiting, which is often an inefficient approach: • A process that is busy waiting cannot proceed until some other process takes a step. • Such a process can relinquish its processor rather than needlessly looping. • Doing so can speed up execution. When is this an invalid argument?

  30. Semaphores A semaphore is an abstraction that allows a process to relinquish its processor. Two kinds: binary and general. Binary:
    P(s): if (s == 1) s = 0; else block;
    V(s): s = 1; if (there is a blocked process) then unblock one;
    (Edsger Dijkstra, 1965)

  31. Semaphores: II General:
    P(s): if (s > 0) s--; else block;
    V(s): s++; if (there is a blocked process) then unblock one;

  32. Solving producer-consumer
    gensem put = max, take = 0;
    int buf[max], in = 0, out = 0;

    Producer
    while (1) {
      P(put);
      buf[in] = v;
      in = (in + 1) % max;
      V(take);
    }

    Consumer
    while (1) {
      P(take);
      c = buf[out];
      out = (out + 1) % max;
      V(put);
    }
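This pseudocode maps almost directly onto POSIX semaphores, with sem_wait playing the role of P and sem_post the role of V. A runnable sketch with one producer and one consumer follows; it is illustrative only, and with several producers or consumers an additional mutex around the buffer indices would also be needed.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define MAX 10

    static long buf[MAX];
    static int in, out;
    static sem_t put, take;                 /* put counts empty slots, take counts full ones */

    static void *producer(void *arg) {
        for (long v = 0; v < 1000; v++) {
            sem_wait(&put);                 /* P(put) */
            buf[in] = v;
            in = (in + 1) % MAX;
            sem_post(&take);                /* V(take) */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        long sum = 0;
        for (int k = 0; k < 1000; k++) {
            sem_wait(&take);                /* P(take) */
            sum += buf[out];
            out = (out + 1) % MAX;
            sem_post(&put);                 /* V(put) */
        }
        printf("sum = %ld\n", sum);         /* expect 499500 */
        return NULL;
    }

    int main(void) {
        sem_init(&put, 0, MAX);             /* gensem put = max */
        sem_init(&take, 0, 0);              /* gensem take = 0 */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }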

  33. Solving mutual exclusion binsem cs=1; while (1) { P(cs); Critical section for process i; V(cs); }

  34. Binary vs. general semaphores gensem s = i is replaced with
    struct s {
      binsem mtx = 1, blk = (i == 0) ? 0 : 1;
      val = i;
    }
    P(s) is replaced with
      P(s.blk); P(s.mtx);
      if (--s.val > 0) V(s.blk);
      V(s.mtx);
    V(s) is replaced with
      P(s.mtx);
      if (++s.val == 1) V(s.blk);
      V(s.mtx);

  35. Monitors An object-based approach built on critical sections and explicit synchronization variables. Entry procedures obtain mutual exclusion. Condition variables (e.g., c): • c.wait() causes the process to wait outside of the critical section. • c.signal() causes one blocked process to take over the critical section (the signalling process temporarily leaves the critical section). (Tony Hoare, 1974)

  36. Semaphores using monitors
    monitor gsem {
      long value;      // value of semaphore
      condition c;     // value > 0
    public:
      void entry P(void);
      void entry V(void);
      gsem(long initial);
    };

    gsem::gsem(long initial) { value = initial; }

    void gsem::P(void) {
      if (value == 0) c.wait();
      value--;
    }

    void gsem::V(void) {
      value++;
      if (value > 0) c.signal();
    }

  37. Solving producer-consumer
    monitor PC {
      const long max = 10;
      long buf[max], i = 0, j = 0, in = 0;
      condition notempty;   // in > 0
      condition notfull;    // in < max
    public:
      void entry put(long v);
      long entry get(void);
    };

    void PC::put(long v) {
      if (in == max) notfull.wait();
      buf[i] = v;
      i = (i + 1) % max;
      in++;
      notempty.signal();
    }

    long PC::get(void) {
      long v;
      if (in == 0) notempty.wait();
      v = buf[j];
      j = (j + 1) % max;
      in--;
      notfull.signal();
      return v;
    }
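Modern thread libraries supply the monitor ingredients separately: a mutex for the implicit entry lock and condition variables with notify-style semantics, so the if around each wait becomes a while (anticipating slide 46). Below is a sketch of the PC monitor in those terms; the structure and function names are chosen here for illustration and are not from the slides.

    #include <pthread.h>

    #define MAX 10

    struct PC {
        long buf[MAX];
        int i, j, in;                        /* put index, get index, item count */
        pthread_mutex_t lock;                /* the monitor's implicit entry lock */
        pthread_cond_t notempty, notfull;    /* in > 0, in < MAX */
    };

    void pc_init(struct PC *pc) {
        pc->i = pc->j = pc->in = 0;
        pthread_mutex_init(&pc->lock, NULL);
        pthread_cond_init(&pc->notempty, NULL);
        pthread_cond_init(&pc->notfull, NULL);
    }

    void pc_put(struct PC *pc, long v) {
        pthread_mutex_lock(&pc->lock);
        while (pc->in == MAX)                /* re-check: wait has notify semantics */
            pthread_cond_wait(&pc->notfull, &pc->lock);
        pc->buf[pc->i] = v;
        pc->i = (pc->i + 1) % MAX;
        pc->in++;
        pthread_cond_signal(&pc->notempty);
        pthread_mutex_unlock(&pc->lock);
    }

    long pc_get(struct PC *pc) {
        pthread_mutex_lock(&pc->lock);
        while (pc->in == 0)
            pthread_cond_wait(&pc->notempty, &pc->lock);
        long v = pc->buf[pc->j];
        pc->j = (pc->j + 1) % MAX;
        pc->in--;
        pthread_cond_signal(&pc->notfull);
        pthread_mutex_unlock(&pc->lock);
        return v;
    }

pc_put and pc_get can be called directly from producer and consumer threads like the ones in the earlier sketches.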

  38. Monitors using semaphores Assume a monitor m with a single condition c (easily generalized for multiple conditions). Generate:
    struct m {
      binsem lock = 1;
      gensem urgent = 0, csem = 0;
      long ccount = 0, ucount = 0;
    };
    Wrap each entry method:
      P(m.lock);
      code for entry method
      if (m.ucount > 0) V(m.urgent); else V(m.lock);

  39. Monitors using semaphores II Replace c.wait() with:
      m.ccount++;
      if (m.ucount > 0) V(m.urgent); else V(m.lock);
      P(m.csem);
      m.ccount--;
    Replace c.signal() with:
      m.ucount++;
      if (m.ccount > 0) { V(m.csem); P(m.urgent); }
      m.ucount--;

  40. Conditions vs. semaphores A semaphore has memory while conditions do not.
    V(s); ... ... P(s);              — does not block
    c.signal(); ... ... c.wait();    — blocks

  41. Programming with conditions A monitor has an invariant (a safety property) that must hold whenever a process enters (and hence leaves) the critical region. A condition strengthens this invariant. When a process requires the stronger property and it does not currently hold, the process waits on the condition. When a process establishes the condition, it signals the condition.

  42. Readers and Writers Let r be the number of readers and w be the number of writers.
    invariant I: (0 ≤ r) ∧ (0 ≤ w ≤ 1) ∧ (r = 0 ∨ w = 0)
    Readers-priority version: let wr be the number of waiting readers.
    readOK: I ∧ (w = 0)
    writeOK: I ∧ (w = 0) ∧ (r = 0) ∧ (wr = 0)

  43. Readers and Writers II
    monitor rprio {
      long r, w;    // number of active readers/writers
      long wr;      // number waiting to read
      condition readOK;
      condition writeOK;
    public:
      void entry startread(void);
      void entry endread(void);
      void entry startwrite(void);
      void entry endwrite(void);
      rprio(void);
    };

    rprio::rprio(void) { r = w = wr = 0; }

  44. Readers and Writers III
    void rprio::startread(void) {
      if (w > 0) { wr++; readOK.wait(); wr--; }
      r++;
      readOK.signal();
    }

    void rprio::endread(void) {
      r--;
      if (r == 0) writeOK.signal();
    }

    void rprio::startwrite(void) {
      if (r > 0 || w > 0 || wr > 0) writeOK.wait();
      w++;
    }

    void rprio::endwrite(void) {
      w--;
      if (wr > 0) readOK.signal(); else writeOK.signal();
    }

  45. Final signals A signalling process must leave the monitor when it signals. Signals are often performed as the last statement of an entry procedure, so this temporary hand-off is inefficient. We could require that signals occur only as the process leaves the entry procedure. Does this limit the power of monitors?

  46. Weakening wait and signal
    {I} if (!c) { {I ∧ ¬c} c.wait(); {I ∧ c} } {I ∧ c}
    Have c.notify() move one process blocked on c to the ready queue:
    {I} while (!c) { {I ∧ ¬c} c.wait(); {I} } {I ∧ c}
    ... and have c.broadcast() move all processes blocked on c to the ready queue.

  47. Weakening wait and signal, II
    void rprio::startread(void) {
      while (w > 0) { wr++; readOK.wait(); wr--; }
      r++;
    }

    void rprio::endread(void) {
      r--;
      if (r == 0) writeOK.notify();
    }

    void rprio::startwrite(void) {
      while (r > 0 || w > 0 || wr > 0) writeOK.wait();
      w++;
    }

    void rprio::endwrite(void) {
      w--;
      if (wr > 0) readOK.broadcast(); else writeOK.notify();
    }
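POSIX condition variables have exactly these weakened semantics: pthread_cond_signal is notify, pthread_cond_broadcast is broadcast, and waits may wake spuriously, so the guards must be while loops. The sketch below translates the readers-priority monitor into those terms; the struct and the explicit lock argument are what pthreads requires rather than anything in the slides.

    #include <pthread.h>

    struct rprio {
        long r, w, wr;                        /* active readers, active writers, waiting readers */
        pthread_mutex_t lock;
        pthread_cond_t readOK, writeOK;
    };

    void rprio_init(struct rprio *m) {
        m->r = m->w = m->wr = 0;
        pthread_mutex_init(&m->lock, NULL);
        pthread_cond_init(&m->readOK, NULL);
        pthread_cond_init(&m->writeOK, NULL);
    }

    void startread(struct rprio *m) {
        pthread_mutex_lock(&m->lock);
        while (m->w > 0) {                    /* wait while a writer is active */
            m->wr++;
            pthread_cond_wait(&m->readOK, &m->lock);
            m->wr--;
        }
        m->r++;
        pthread_mutex_unlock(&m->lock);
    }

    void endread(struct rprio *m) {
        pthread_mutex_lock(&m->lock);
        if (--m->r == 0)
            pthread_cond_signal(&m->writeOK); /* last reader lets one writer in */
        pthread_mutex_unlock(&m->lock);
    }

    void startwrite(struct rprio *m) {
        pthread_mutex_lock(&m->lock);
        while (m->r > 0 || m->w > 0 || m->wr > 0)
            pthread_cond_wait(&m->writeOK, &m->lock);
        m->w++;
        pthread_mutex_unlock(&m->lock);
    }

    void endwrite(struct rprio *m) {
        pthread_mutex_lock(&m->lock);
        m->w--;
        if (m->wr > 0)
            pthread_cond_broadcast(&m->readOK);  /* wake all waiting readers */
        else
            pthread_cond_signal(&m->writeOK);
        pthread_mutex_unlock(&m->lock);
    }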

  48. But... Are explicitly named condition variables really needed? Since notifyAll moves all blocked processes onto the ready queue, we can have the guard of the while loop sort out who should run:
    {I} while (!c) { {I ∧ ¬c} wait(); {I} } {I ∧ c}
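As a sketch of this idea, the general semaphore of slide 36 can be rebuilt with a single condition variable per monitor and broadcast; the while loop on the guard decides which awakened process may actually proceed. The names here are invented for the example.

    #include <pthread.h>

    struct gsem {
        long value;
        pthread_mutex_t lock;                 /* the monitor's implicit entry lock */
        pthread_cond_t cond;                  /* the single condition variable */
    };

    void gsem_init(struct gsem *s, long initial) {
        s->value = initial;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
    }

    void gsem_P(struct gsem *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value == 0)                 /* guard re-checked after every wakeup */
            pthread_cond_wait(&s->cond, &s->lock);
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void gsem_V(struct gsem *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_broadcast(&s->cond);     /* wake everyone; the guards sort it out */
        pthread_mutex_unlock(&s->lock);
    }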

  49. Having fewer condition variables Many Unix kernels have been structured in a similar manner. [Process-state diagram: user running ↔ kernel running via "syscall or interrupt" / "return"; kernel running → sleep via "sleep on event"; sleep → ready via "wakeup all processes sleeping on event"; ready → kernel running via "schedule".]

  50. Unix kernel sleep • sleep takes as a parameter a wait channel, which is typically the address of some data structure. • This address is hashed to choose one of a set of sleep queues. • wakeup wakes up all processes sleeping on the corresponding sleep queue.
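A toy user-space sketch of this structure: the wait channel is an address, the address is hashed onto a fixed array of queues, and wakeup broadcasts to every sleeper hashed to that queue, so (as in the kernel) wakeups can be spurious and callers must re-check their condition. All names here (ksleep, kwakeup, sleepq) are invented for the sketch.

    #include <pthread.h>
    #include <stdint.h>

    #define NQUEUE 64

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* stands in for the kernel lock */
    static pthread_cond_t queue[NQUEUE];                       /* the sleep queues */
    static pthread_once_t once = PTHREAD_ONCE_INIT;

    static void init_queues(void) {
        for (int i = 0; i < NQUEUE; i++)
            pthread_cond_init(&queue[i], NULL);
    }

    /* Hash a wait channel (an address) onto one of the sleep queues. */
    static pthread_cond_t *sleepq(void *chan) {
        pthread_once(&once, init_queues);
        return &queue[((uintptr_t)chan >> 4) % NQUEUE];
    }

    /* Caller must hold lock and must re-check its condition on return,
     * since kwakeup wakes every sleeper hashed to the same queue. */
    void ksleep(void *chan) {
        pthread_cond_wait(sleepq(chan), &lock);
    }

    /* Wake all processes sleeping on chan's sleep queue (called with lock held). */
    void kwakeup(void *chan) {
        pthread_cond_broadcast(sleepq(chan));
    }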
