
Concurrency




  1. Concurrency

  2. Levels of concurrency
  • Instruction: machine
  • Statement: programming language
  • Unit/subprogram: programming language
  • Program: machine, operating system

  3. Kinds of concurrency
  • Co-routines: multiple execution sequences, but only one executing at once
  • Physical concurrency: separate instruction sequences executing at the same time
  • Logical concurrency: time-shared simulation of physical concurrency

  4. Subprogram call compared to unit-level concurrency
  [diagram] A (sequential) procedure call suspends the caller: B calls A, B is suspended while A runs, and "end A" resumes B. A (concurrent) task invocation does not: B invokes A, A starts, and B continues executing alongside A.

  5. Synchronization of concurrent tasks
  • Disjoint: tasks A, B, C share nothing and never block one another
  • Cooperative: one task blocks waiting for an action by another, e.g., copying between media of different access speeds
  • Competitive: tasks A and B block one another over shared data, e.g., updating elements of a data set

  6. A competitive synchronization problem example
  Modify a bank account with balance $200: transaction task A deposits $100, transaction task B withdraws $50. Each task should have exclusive access.
  • Sequence I: A fetch 200, A add 100, A store 300, B fetch 300, B subtract 50, B store 250 (final balance 250: correct)
  • Sequence II: A fetch 200, B fetch 200, A add 100, A store 300, B subtract 50, B store 150 (final balance 150: A's deposit is lost)
  • Sequence III: B fetch 200, B subtract 50, B store 150, A fetch 150, A add 100, A store 250 (final balance 250: correct)
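The three interleavings above can be replayed deterministically. The sketch below (class and method names are mine, not from the slides) models fetch/compute/store as explicit steps on a local copy, so each sequence can be stepped through without real threads:

```java
// Deterministic replay of the slide's three interleavings.
// Each task keeps a local copy between fetch and store, which is
// exactly what makes Sequence II lose A's deposit.
public class RaceDemo {
    static int balance;

    static int fetch() { return balance; }     // read shared balance
    static void store(int v) { balance = v; }  // write shared balance

    // Sequence I: A runs to completion, then B (correct: 250)
    static int sequenceI() {
        balance = 200;
        int a = fetch(); a += 100; store(a);   // A: deposit 100
        int b = fetch(); b -= 50;  store(b);   // B: withdraw 50
        return balance;
    }

    // Sequence II: B fetches before A stores (lost update: 150)
    static int sequenceII() {
        balance = 200;
        int a = fetch();                       // A fetch 200
        int b = fetch();                       // B fetch 200 (stale)
        a += 100; store(a);                    // A store 300
        b -= 50;  store(b);                    // B store 150: A's deposit lost
        return balance;
    }

    // Sequence III: B runs to completion, then A (correct: 250)
    static int sequenceIII() {
        balance = 200;
        int b = fetch(); b -= 50;  store(b);
        int a = fetch(); a += 100; store(a);
        return balance;
    }

    public static void main(String[] args) {
        System.out.println(sequenceI() + " " + sequenceII() + " " + sequenceIII());
        // prints: 250 150 250
    }
}
```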

  7. (Task) Scheduler
  • Allocates tasks to processor(s) for a period of time, a 'time slice'
  • Tracks which tasks are ready to run (e.g., not blocked)
  • Maintains a priority queue of ready tasks
  • Allocates the next task when a processor is free

  8. Concurrency control structures
  • Create, start, stop, and destroy tasks
  • Provide mutually exclusive access to shared resources
  • Make competing and cooperating tasks wait (for a shared resource or other action)
  • Three models: semaphores, monitors, message passing

  9. Scheduler: states a task can be in
  • new, runnable, running, blocked, dead
  • deadlock danger arises in the blocked state

  10. semaphores
  • control statements: wait(s) and release(s), where s is a semaphore
  • e.g., competition for the shared resource 'account':

    task doDeposit
      loop
        get(amount)
        wait(accountAccess)
        deposit(amount, account)
        release(accountAccess)
      end loop
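The wait/release pair above maps directly onto acquire()/release() on a binary java.util.concurrent.Semaphore. A minimal sketch (class and field names are mine):

```java
import java.util.concurrent.Semaphore;

// wait(accountAccess)  -> accountAccess.acquire()
// release(accountAccess) -> accountAccess.release()
public class SemaphoreDeposit {
    static final Semaphore accountAccess = new Semaphore(1); // binary semaphore
    static int account = 0;

    static void deposit(int amount) throws InterruptedException {
        accountAccess.acquire();       // wait(accountAccess)
        try {
            account += amount;         // critical section: exclusive access
        } finally {
            accountAccess.release();   // release(accountAccess), even on error
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            try { for (int i = 0; i < 1000; i++) deposit(1); }
            catch (InterruptedException ignored) {}
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(account);   // 2000: no lost updates
    }
}
```

Releasing in a finally block avoids the "omitted release" deadlock discussed on a later slide.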

  11. concurrent processes
  Two doDeposit tasks run the same loop and compete for the account:

    task doDeposit
      loop
        get(amount)
        wait(accountAccess)
        deposit(amount, account)
        release(accountAccess)
      end loop

  The semaphore serializes them: the first task's wait(s) succeeds, the second task's wait(s) blocks; the first deposits and releases, then the second deposits and releases.

  12. semaphores
  e.g., cooperative synchronization by a producer and a consumer sharing a buffer (queue):

    task produce
      loop
        getTransaction(amount)
        wait(queueNotFull)
        putQueue(amount)
        release(queueNotEmpty)
      end loop

    task consume
      loop
        wait(queueNotEmpty)
        getQueue(amount)
        release(queueNotFull)
        doTransaction(amount)
      end loop
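The producer/consumer scheme above can be sketched with two counting semaphores: queueNotFull counts free slots, queueNotEmpty counts filled slots. (The slide omits a guard on the queue itself, which the next slide adds; here a synchronized block stands in for it. Names and the buffer capacity are mine.)

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

public class ProducerConsumer {
    static final int CAPACITY = 5;
    static final Semaphore queueNotFull = new Semaphore(CAPACITY); // free slots
    static final Semaphore queueNotEmpty = new Semaphore(0);       // filled slots
    static final Deque<Integer> queue = new ArrayDeque<>();
    static int consumedSum = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try {
                for (int amount = 1; amount <= 100; amount++) {
                    queueNotFull.acquire();                    // wait(queueNotFull)
                    synchronized (queue) { queue.addLast(amount); } // putQueue
                    queueNotEmpty.release();                   // release(queueNotEmpty)
                }
            } catch (InterruptedException ignored) {}
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queueNotEmpty.acquire();                   // wait(queueNotEmpty)
                    int amount;
                    synchronized (queue) { amount = queue.removeFirst(); } // getQueue
                    queueNotFull.release();                    // release(queueNotFull)
                    consumedSum += amount;                     // doTransaction
                }
            } catch (InterruptedException ignored) {}
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
        System.out.println(consumedSum); // 1+2+...+100 = 5050
    }
}
```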

  13. complete processes
  Combining both kinds of synchronization: queueNotFull/queueNotEmpty coordinate producer and consumer, queueAccess guards the queue itself, and accountAccess guards the account.

    task produce
      loop
        getTransaction(amount)
        wait(queueNotFull)
        wait(queueAccess)
        putQueue(amount)
        release(queueNotEmpty)
        release(queueAccess)
      end loop

    task consumeAndDoDep
      loop
        wait(queueNotEmpty)
        wait(queueAccess)
        getQueue(amount)
        release(queueNotFull)
        release(queueAccess)
        wait(accountAccess)
        deposit(amount, account)
        release(accountAccess)
      end loop

  14. semaphore implementation
  • A semaphore is a counter plus a queue of waiting tasks
  • While count > 0: wait(queueNotFull) decrements the count and the task proceeds; release(queueNotFull) increments it
  • While count = 0: wait(queueNotFull) blocks the task and adds it to the queue; release(queueNotFull) unblocks the first task on the queue
  • In the example, the count is the available space in the transaction queue's buffer (e.g., 5 slots free vs. 0)
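The "counter + queue of waiting tasks" description can be sketched directly in Java, where each object's built-in wait set plays the role of the queue (the class and method names are mine; semWait/semRelease avoid clashing with Object.wait):

```java
// Minimal counting semaphore: a counter plus the JVM's per-object
// wait set as the queue of blocked tasks.
public class CountingSemaphore {
    private int count;

    public CountingSemaphore(int initial) { count = initial; }

    public synchronized void semWait() throws InterruptedException {
        while (count == 0) wait();  // count = 0: block, join the wait queue
        count--;                    // count > 0: decrement and proceed
    }

    public synchronized void semRelease() {
        count++;                    // count++
        notify();                   // unblock one waiting task, if any
    }

    public synchronized int available() { return count; }

    public static void main(String[] args) throws InterruptedException {
        CountingSemaphore s = new CountingSemaphore(5); // 5 slots free
        s.semWait(); s.semWait();
        System.out.println(s.available()); // 3
        s.semRelease();
        System.out.println(s.available()); // 4
    }
}
```

The synchronized methods give the semaphore's own data the exclusive access that the next slide points out it needs.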

  15. semaphore problems
  • A semaphore is itself a data structure, so it needs exclusive access too: it must be implemented with 'uninterruptible' instructions
  • Vulnerable to deadlock: an omitted 'release'
  • Vulnerable to data corruption or run-time errors: an omitted 'wait'
  • Correct use can't be checked statically, e.g., when the wait and release appear in different units

  16. monitors
  • The (Concurrent) Pascal / Modula model of concurrency (late '70s)
  • keywords:
    • concurrent tasks: process, init
    • shared data resource: monitor, entry, queue
  • competitive synchronization strategy:
    • create a monitor to contain all data with shared access, and write procedures for accessing it; the monitor implicitly controls competitive access
    • write process tasks that use the monitor procedures
  • a monitor is essentially an object
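A Java object with all data private and every access method synchronized behaves like the monitor described above: mutual exclusion is implicit in the access procedures. A sketch (class and thread names are mine, mirroring the account example on the next slides):

```java
// A monitor in the Concurrent Pascal sense: private shared data,
// synchronized entry procedures, implicit competitive access control.
public class AccountMonitor {
    private int bal = 0;

    public synchronized void deposit(int dep) { bal += dep; }
    public synchronized void withdraw(int wd) { bal -= wd; }
    public synchronized int balance()         { return bal; }

    public static void main(String[] args) throws InterruptedException {
        AccountMonitor acct = new AccountMonitor();
        // three "acctMgr processes" sharing one monitor
        Runnable mgr = () -> { for (int i = 0; i < 1000; i++) acct.deposit(1); };
        Thread m1 = new Thread(mgr), m2 = new Thread(mgr), m3 = new Thread(mgr);
        m1.start(); m2.start(); m3.start();
        m1.join(); m2.join(); m3.join();
        System.out.println(acct.balance()); // 3000: no interleaved updates
    }
}
```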

  17. monitor example: competitive synchronization

    type account = monitor
      var bal: real;
      procedure entry deposit (dep: real);
      begin bal := bal + dep end;
      procedure entry withdraw (wd: real);
      begin bal := bal - wd end;
    begin bal := 0.0 end;

    type acctMgr = process (acct: account);
      var amt: real;
          request: integer;
    begin
      cycle
        << get request, amt >>
        if request = 0 then acct.deposit(amt)
        else acct.withdraw(amt)
      end
    end;

  18. monitor example: competitive synchronization

    << following the type declarations >>
    var bankAcct: account;
        mgr1, mgr2, mgr3: acctMgr;
    begin
      init bankAcct, mgr1(bankAcct), mgr2(bankAcct), mgr3(bankAcct);
    end;

  19. monitors and cooperative synchronization
  • type queue: a semaphore-like object used inside a monitor
  • two procedures, delay and continue, similar to wait and release, BUT:
    • delay always blocks the process (task), so the programmer of the monitor must control its use
    • delay and continue override the monitor's access control

  20. monitor example: cooperative synchronization (Sebesta, p. 531)
  [diagram] A producer process and a consumer process share a monitor new_buffer of type databuf; the producer calls buffer.deposit(), the consumer calls buffer.fetch(). Inside the monitor: the array buf, the procedures deposit and fetch, and two queues, sender_q and receiver_q, on which full/empty conditions delay the processes.

  21. monitor problems
  • The central-data-structure model is not appropriate for distributed systems, a common environment for concurrency
  • Terminating processes occurs only at the end of the program

  22. message passing
  • A message is sent from one (sender) task to another (receiver) task and returned
  • A message may have parameters: value, result, or value-result
  • The message is sent when the tasks synchronize (both are blocked and need the message to continue)
  • The time between send and return is the rendezvous (the sender task is suspended)
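A rough model of this rendezvous in Java is a SynchronousQueue, which has no capacity: put() blocks until a matching take() arrives, so sender and receiver meet before either proceeds. (The class name and the value 42 are mine; this sketches only the synchronization, not parameter modes.)

```java
import java.util.concurrent.SynchronousQueue;

// SynchronousQueue as a rendezvous point: the sender's put() and the
// receiver's take() each block until the other side is ready.
public class Rendezvous {
    static final SynchronousQueue<Integer> channel = new SynchronousQueue<>();
    static int received = -1;

    public static void main(String[] args) throws InterruptedException {
        Thread receiver = new Thread(() -> {
            try {
                received = channel.take(); // blocks until a sender arrives
            } catch (InterruptedException ignored) {}
        });
        receiver.start();
        channel.put(42);                   // sender blocks until the take()
        receiver.join();
        System.out.println(received);      // 42
    }
}
```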

  23. concurrency with messages
  [diagram] If the sender reaches its message statement first, it blocks until the receiver reaches its receive; if the receiver reaches its receive first, it blocks until the message is sent. Once both are ready, the sender is suspended for the duration of the rendezvous while the receiver processes the message.

  24. example 1: receiver task structure (Ada)
  • specification (pseudo-Ada-83):

    task type acct_access is
      entry deposit (dep : in integer);
    end acct_access;

  • body:

    task body acct_access is
      balance, count : integer;
    begin
      loop
        accept deposit (dep : in integer) do
          balance := balance + dep;
        end deposit;
        count := count + 1;
      end loop;
    end acct_access;

  25. extended example 2: receiver task
  • specification:

    task type acct_access is
      entry deposit (dep : in integer);
      entry getBal (bal : out integer);
    end acct_access;

  • body:

    task body acct_access is
      balance, count : integer;
    begin
      balance := 0;
      count := 0;
      loop
        select
          accept deposit (dep : in integer) do
            balance := balance + dep;
          end deposit;
        or
          accept getBal (bal : out integer) do
            bal := balance;
          end getBal;
        end select;
        count := count + 1;
      end loop;
    end acct_access;

  26. important points
  • The receiver is only ready to receive a message when execution reaches an 'accept' clause
  • The sender is suspended until the accept's 'do .. end' is completed
  • select is a guarded command: eligible cases are selected at random
  • Tasks can be both senders and receivers
  • Pure receivers are 'servers'; pure senders are 'actors'

  27. message example: cooperative/competitive synchronization (Sebesta, p. 540)
  [sketch] The producer task loops: produce k, then buffer.deposit(k). The consumer task loops: buffer.fetch(k), then consume k. The buffer task (BUF_TASK) holds the array BUF and loops on a select with guarded deposit(k) and fetch(k) alternatives.

  28. messages to protected objects
  • Tasks used to protect shared data are slow (rendezvous are slow)
  • Protected objects are a simpler, more efficient alternative
  • Similar to monitors
  • They distinguish write access (exclusive) from read access (shared) (see also binary semaphores)

  29. asynchronous messages
  • No rendezvous
  • The sender does not block after sending a message (and therefore does not know when it has been executed)
  • The receiver does not block if no message is there to be received (it continues with other processing)
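These properties can be sketched with a Java BlockingQueue used non-blockingly: offer() queues a message without suspending the sender, and poll() returns null instead of blocking an empty receiver. (The class name, demo method, and message strings are mine.)

```java
import java.util.concurrent.LinkedBlockingQueue;

// Asynchronous mailbox: sender never blocks, receiver never blocks.
public class AsyncMailbox {
    static String demo() {
        LinkedBlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

        String early = mailbox.poll();    // no message yet: null, receiver continues
        mailbox.offer("deposit 100");     // sender queues message and moves on
        mailbox.offer("withdraw 50");     // sender still has no idea when these run
        String first = mailbox.poll();    // messages are delivered in FIFO order

        return early + " / " + first;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // null / deposit 100
    }
}
```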

  30. asynchronous messages
  [diagram] The sender's message statements are queued without blocking the sender. The receiver's receive either finds a queued message and executes the message procedure, or finds no message and continues.

  31. Ada's (95) asynchronous 'select'
  The select statement gains a 'then abort' form: a triggering alternative (a 'delay k' or an 'accept' of a message) is paired with an abortable part:

    select
      delay k;     -- or an 'accept' / entry-call trigger
    then abort
      ...          -- abandoned if the trigger completes first
    end select;

  32. related features of Ada (p. 542)
  • task initiation and termination
  • initiation is like a procedure call: account_access();
  • a task terminates when:
    • it is 'completed' (code finished or an exception raised) and all dependent tasks have terminated, OR
    • it is stopped at a terminate clause, and the caller task or procedure and its siblings are complete or at a terminate

  33. related features of Ada (p. 542)
  • priorities for execution from the ready queue: pragma priority (System.priority.First)
  • a compiler directive
  • does not affect guarded command selection

  34. related features of Ada (p. 542)
  • binary semaphores built as tasks to protect data access (pseudocode):

    task sem is
      entry wait;
      entry release;
    end sem;

    task body sem is
    begin
      loop
        accept wait;
        accept release;
      end loop;
    end sem;

    -- in a user task
    aSem : sem;
    ...
    aSem.wait;
    point.x := xi;
    aSem.release;
    ...

  35. Java concurrency: Threads
  • classes and interfaces:
    • Thread, Runnable
    • ThreadGroup
    • ThreadDeath
    • Timer, TimerTask
    • Object

  36. creating a thread
  • extend the Thread class:

    class NewThread extends Thread { }
    NewThread n = new NewThread();
    n.start();

  • implement the Runnable interface:

    class NewT implements Runnable { }
    NewT rn = new NewT();
    Thread t = new Thread(rn);
    t.start();
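Both styles can be made runnable in one self-contained sketch (the outer class, the counter, and the use of join() to wait for completion are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Both thread-creation styles from the slide, run to completion.
public class ThreadCreation {
    static final AtomicInteger runs = new AtomicInteger();

    static class NewThread extends Thread {      // style 1: extend Thread
        @Override public void run() { runs.incrementAndGet(); }
    }

    static class NewT implements Runnable {      // style 2: implement Runnable
        @Override public void run() { runs.incrementAndGet(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread n = new NewThread();
        n.start();                               // start(), never run() directly

        Thread t = new Thread(new NewT());       // Runnable wrapped in a Thread
        t.start();

        n.join(); t.join();                      // wait for both to finish
        System.out.println(runs.get());          // 2
    }
}
```

Calling run() directly would execute the body on the current thread; only start() creates a new one.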

  37. terminating a thread
  • stop(); // deprecated: throws a ThreadDeath object
  • preferred: set the thread reference to null (and let run() return)

  38. thread states
  [diagram] start() moves a new thread to runnable; the scheduler moves it between runnable and running. wait(), sleep(.), and I/O block a running thread (not runnable) until notify()/notifyAll(), a timeout, or an unblock returns it to runnable; yield() returns it to runnable directly; when run() terminates, the thread is dead.

  39. priorities
  • Thread class constants and methods for managing priorities:
    • setPriority, getPriority
    • MAX_PRIORITY, MIN_PRIORITY, NORM_PRIORITY
  • Timeslicing is not in the Java runtime scheduler; a running thread is interrupted only for a higher-priority thread
  • yield()

  40. competitive synchronization
  • The synchronized keyword, on methods or blocks of code, associates a lock with access to a resource
  • The object acts like a monitor
  • Locks are re-entrant: one synchronized method can call another without deadlock
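The re-entrancy point is easy to demonstrate: a synchronized method that calls another synchronized method on the same object re-acquires a lock it already holds, so it does not deadlock. (Class and method names are mine.)

```java
// Java's intrinsic locks are re-entrant: incrementTwice() holds the
// object's lock and can still call increment(), which wants that same lock.
public class ReentrantCounter {
    private int count = 0;

    public synchronized void increment() { count++; }

    public synchronized void incrementTwice() {
        increment();  // re-acquires the already-held lock: no deadlock
        increment();
    }

    public synchronized int get() { return count; }

    public static void main(String[] args) {
        ReentrantCounter c = new ReentrantCounter();
        c.incrementTwice();
        c.increment();
        System.out.println(c.get()); // 3
    }
}
```

With a non-re-entrant lock (e.g., the binary semaphore sketches earlier), the nested acquire would block forever.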

  41. cooperative synchronization
  • wait(), notify(), and notifyAll()
  • defined in the Object class
  • like delay and continue, or wait and release
  • example code in the java.sun.com tutorial
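A minimal wait()/notifyAll() sketch in the delay/continue spirit (class and method names are mine): a consumer waits inside the monitor until a producer has set a value and signaled. The while loop re-checks the guard after every wake-up, which is the standard idiom.

```java
// Cooperative synchronization with Object.wait()/notifyAll():
// take() delays until put() has supplied a value and continued it.
public class Handoff {
    private Integer value = null;

    public synchronized void put(int v) {
        value = v;
        notifyAll();                  // like 'continue': wake any waiters
    }

    public synchronized int take() throws InterruptedException {
        while (value == null) wait(); // like 'delay': block, then re-check guard
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        Handoff h = new Handoff();
        Thread consumer = new Thread(() -> {
            try { System.out.println(h.take()); } // prints 7 once put() runs
            catch (InterruptedException ignored) {}
        });
        consumer.start();
        h.put(7);
        consumer.join();
    }
}
```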

  42. Scheduling tasks
  • Timer and TimerTask
  • A Timer is a special thread for scheduling a task at some future time
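A small sketch of Timer/TimerTask use (the latch, delay value, and class name are mine; the latch just lets the main thread wait until the scheduled task has actually run):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

// Timer runs scheduled tasks on a single background thread;
// schedule(task, delayMillis) runs the task once after the delay.
public class TimerDemo {
    static volatile boolean ran = false;

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        CountDownLatch done = new CountDownLatch(1);

        timer.schedule(new TimerTask() {
            @Override public void run() {
                ran = true;           // the scheduled work
                done.countDown();     // signal completion
            }
        }, 50);                       // run once, 50 ms from now

        done.await();                 // block until the task has run
        timer.cancel();               // stop the timer's background thread
        System.out.println(ran);      // true
    }
}
```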

  43. thread groups
  • All threads belong to groups; the default group is main
  • Thread groups form a hierarchy (like a directory structure)
  • Access control (e.g., security) is by management of thread groups

  44. statement-level concurrency
  • concurrent execution with minimal communication
  • useless without multiple processors
  • SIMD (Single Instruction, Multiple Data): simpler, more restricted
  • MIMD (Multiple Instruction, Multiple Data): more complex, more powerful

  45. e.g. array of points: find the closest to the origin

    public int closest(Point[] p) {
        double minDist = Double.MAX_VALUE;
        int idx = 0;
        for (int i = 0; i < p.length; i++) {
            double dist = p[i].distance();  // independent
            if (dist < minDist) {           // synchronized
                minDist = dist;
                idx = i;
            }
        }
        return idx;
    }

  46. e.g. array of points: find the closest to the origin, with SIMD concurrent execution

    public int closest(Point[] p) {
        double minDist = Double.MAX_VALUE;
        int idx = 0;
        double[] dist = new double[p.length];
        forall (int i = 0 : p.length)       // pseudo-code: all iterations in parallel
            dist[i] = p[i].distance();
        for (int i = 0; i < p.length; i++)
            if (dist[i] < minDist) {
                minDist = dist[i];
                idx = i;
            }
        return idx;
    }
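The pseudo-code forall can be expressed in real Java with a parallel stream: the independent distance computations run in parallel, followed by a sequential reduction. (The class name and the points-as-arrays representation are mine, standing in for the slides' Point class.)

```java
import java.util.stream.IntStream;

// 'forall' as a parallel stream over independent distance computations,
// then a sequential pass to pick the index of the minimum.
public class Closest {
    public static int closest(double[][] p) {    // p[i] = {x, y}
        double[] dist = new double[p.length];
        IntStream.range(0, p.length)
                 .parallel()                     // independent: safe to parallelize
                 .forEach(i -> dist[i] = Math.hypot(p[i][0], p[i][1]));

        int idx = 0;                             // sequential reduction
        for (int i = 1; i < p.length; i++)
            if (dist[i] < dist[idx]) idx = i;
        return idx;
    }

    public static void main(String[] args) {
        double[][] pts = { {3, 4}, {1, 1}, {0, 2}, {5, 0} };
        System.out.println(closest(pts)); // 1: (1,1) is nearest the origin
    }
}
```

Each parallel iteration writes a distinct dist[i], so no synchronization is needed in the parallel phase; only the min-selection must see all results.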

  47. sequential vs concurrent
  [diagram] The sequential loop (i = 0; i < n; i++) visits i = 0, 1, ..., n-1 one iteration at a time; the concurrent version runs all n iterations at once.

  48. High Performance Fortran (HPF)
  • concurrency:
    • FORALL: process elements in lockstep parallel
    • INDEPENDENT: iterated statements can be run in any order
  • distribution to processors:
    • DISTRIBUTE: pattern for allocating array elements to processors
    • ALIGN: matching the allocation of arrays with each other

  49. FORALL: note synchronization

    FORALL ( I = 2 : 4 )
      A(I) = A(I-1) + A(I+1)
      C(I) = B(I) * A(I+1)
    END FORALL

  Each statement completes for all I before the next begins:
  1. get all A(I-1), A(I+1) and calculate the sums
  2. assign the sums to all A(I)
  3. get all B(I), A(I+1) and calculate the products
  4. assign the products to all C(I)

  50. INDEPENDENT compiler directive

    !HPF$ INDEPENDENT
    DO J = 1, 3
      A(J) = A( B(J) )
      C(J) = A(J) * B(A(J))
    END DO

  The directive declares the iterations independent, and therefore OK to execute in parallel.
