
Analysis of Multithreaded Programs

Presentation Transcript


  1. Analysis of Multithreaded Programs Martin Rinard Laboratory for Computer Science Massachusetts Institute of Technology

  2. What is a multithreaded program? • NOT general parallel programs: no message passing, no tuple spaces, no functional programs, no concurrent constraint programs • NOT just multiple threads of control: no continuations, no reactive systems • Multiple parallel threads of control that acquire and release locks and read and write shared mutable memory
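
      To make this model concrete, here is a minimal Java sketch (not from the slides; all names are illustrative) of two parallel threads that acquire and release a lock around reads and writes of shared mutable memory:

      // Hypothetical illustration of the execution model described above:
      // multiple threads, lock acquire/release, shared mutable memory.
      class SharedCell {
          private int value = 0;                  // shared mutable memory
          private final Object lock = new Object();

          void add(int delta) {
              synchronized (lock) {               // lock acquire
                  value = value + delta;          // read and write shared memory
              }                                   // lock release
          }

          int get() {
              synchronized (lock) { return value; }
          }
      }

      class ModelDemo {
          public static void main(String[] args) throws InterruptedException {
              SharedCell cell = new SharedCell();
              Thread t1 = new Thread(() -> cell.add(1));
              Thread t2 = new Thread(() -> cell.add(2));
              t1.start(); t2.start();             // multiple parallel threads of control
              t1.join(); t2.join();
              System.out.println(cell.get());     // always prints 3
          }
      }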

  3. Why do programmers use threads? • Performance (parallel computing programs) • Single computation • Execute subcomputations in parallel • Example: parallel sort • Program structuring mechanism (activity management programs) • Multiple activities • Thread for each activity • Example: web server • Properties have big impact on analyses

  4. Practical Implications • Threads are useful and increasingly common • POSIX threads standard for C, C++ • Java has built-in thread support • Widely used in industry • Threads introduce complications • Programs viewed as more difficult to develop • Analyses must handle new model of execution • Lots of interesting and important problems!

  5. Outline • Examples of multithreaded programs • Parallel computing program • Activity management program • Analyses for multithreaded programs • Handling data races • Future directions

  6. Parallel Sort

  7. Example - Divide and Conquer Sort 7 4 6 1 3 5 8 2

  8. Example - Divide and Conquer Sort
     7 4 6 1 3 5 8 2
     Divide: 7 4 | 6 1 | 3 5 | 8 2

  9. Example - Divide and Conquer Sort
     7 4 6 1 3 5 8 2
     Divide: 7 4 | 6 1 | 3 5 | 8 2
     Conquer: 4 7 | 1 6 | 3 5 | 2 8

  10. Example - Divide and Conquer Sort
      7 4 6 1 3 5 8 2
      Divide: 7 4 | 6 1 | 3 5 | 8 2
      Conquer: 4 7 | 1 6 | 3 5 | 2 8
      Combine: 1 4 6 7 | 2 3 5 8

  11. Example - Divide and Conquer Sort
      7 4 6 1 3 5 8 2
      Divide: 7 4 | 6 1 | 3 5 | 8 2
      Conquer: 4 7 | 1 6 | 3 5 | 2 8
      Combine: 1 4 6 7 | 2 3 5 8
      Combine: 1 2 3 4 5 6 7 8

  12. Divide and Conquer Algorithms • Lots of Recursively Generated Concurrency • Solve Subproblems in Parallel

  13. Divide and Conquer Algorithms • Lots of Recursively Generated Concurrency • Recursively Solve Subproblems in Parallel

  14. Divide and Conquer Algorithms • Lots of Recursively Generated Concurrency • Recursively Solve Subproblems in Parallel • Combine Results in Parallel

  15. “Sort n Items in d, Using t as Temporary Storage”
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }

  16. “Sort n Items in d, Using t as Temporary Storage”
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      Divide array into subarrays and recursively sort subarrays in parallel

  17. “Sort n Items in d, Using t as Temporary Storage”
      d: 7 4 6 1 3 5 8 2
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      Subproblems Identified Using Pointers Into Middle of Array: d, d+n/4, d+n/2, d+3*(n/4)

  18. “Sort n Items in d, Using t as Temporary Storage”
      d: 4 7 1 6 3 5 2 8
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      Sorted Results Written Back Into Input Array: d, d+n/4, d+n/2, d+3*(n/4)

  19. “Merge Sorted Quarters of d Into Halves of t”
      d: 4 7 1 6 3 5 2 8
      t: 1 4 6 7 2 3 5 8
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      (pointers shown on slide: d, t, t+n/2)

  20. “Merge Sorted Halves of t Back Into d”
      t: 1 4 6 7 2 3 5 8
      d: 1 2 3 4 5 6 7 8
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      (pointers shown on slide: d, t, t+n/2)

  21. “Use a Simple Sort for Small Problem Sizes”
      d: 7 4 6 1 3 5 8 2
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      (pointers shown on slide: d, d+n)

  22. “Use a Simple Sort for Small Problem Sizes”
      d: 7 4 1 6 3 5 8 2
      void sort(int *d, int *t, int n) {
        if (n > CUTOFF) {
          spawn sort(d,t,n/4);
          spawn sort(d+n/4,t+n/4,n/4);
          spawn sort(d+2*(n/4),t+2*(n/4),n/4);
          spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4));
          sync;
          spawn merge(d,d+n/4,d+n/2,t);
          spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2);
          sync;
          merge(t,t+n/2,t+n,d);
        } else insertionSort(d,d+n);
      }
      (pointers shown on slide: d, d+n)
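
      The merge routine invoked throughout slides 15-22 is never shown on the slides. A minimal sequential sketch of what such a routine might do, written here in Java with array indices standing in for the C pointers (the name and signature are assumptions, not the slide code):

      class MergeSketch {
          // Merge sorted ranges d[lo..mid) and d[mid..hi) into t starting at tLo.
          static void merge(int[] d, int lo, int mid, int hi, int[] t, int tLo) {
              int i = lo, j = mid, k = tLo;
              while (i < mid && j < hi) {
                  t[k++] = (d[i] <= d[j]) ? d[i++] : d[j++];   // take the smaller head
              }
              while (i < mid) t[k++] = d[i++];                 // copy any leftovers
              while (j < hi)  t[k++] = d[j++];
          }
      }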

  23. Key Properties of Parallel Computing Programs • Structured form of multithreading • Parallelism confined to small region • Single thread coming in • Multiple threads exist during computation • Single thread going out • Deterministic computation • Tasks update disjoint parts of data structure in parallel without synchronization • May also have parallel reductions
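
      A minimal Java sketch of these properties (illustrative only, not from the slides): two threads update disjoint halves of an array with no synchronization, and a small reduction combines their partial results after both finish:

      class DisjointUpdateDemo {
          public static void main(String[] args) throws InterruptedException {
              int[] data = new int[8];
              long[] partial = new long[2];               // one slot per thread: no sharing

              // Each thread writes only its own half of data and its own partial slot,
              // so no synchronization is needed while they run.
              Thread left = new Thread(() -> {
                  for (int i = 0; i < 4; i++) { data[i] = i * i; partial[0] += data[i]; }
              });
              Thread right = new Thread(() -> {
                  for (int i = 4; i < 8; i++) { data[i] = i * i; partial[1] += data[i]; }
              });
              left.start(); right.start();
              left.join(); right.join();                  // like sync: wait for both tasks

              long total = partial[0] + partial[1];       // combine step of the reduction
              System.out.println(total);                  // deterministic result: 140
          }
      }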

  24. Web Server

  25. Main Loop Client Threads Accept new connection Start new client thread

  26. Main Loop Client Threads Accept new connection Start new client thread

  27. Main Loop Client Threads Accept new connection Start new client thread Wait for input Produce output

  28. Main Loop Client Threads Accept new connection Start new client thread Wait for input Produce output

  29. Main Loop Client Threads Accept new connection Wait for input Start new client thread Wait for input Produce output

  30. Main Loop Client Threads Accept new connection Wait for input Start new client thread Wait for input Produce output

  31. Main Loop Client Threads Accept new connection Wait for input Wait for input Start new client thread Produce output Wait for input Produce output

  32. Main Loop Client Threads Accept new connection Wait for input Wait for input Start new client thread Produce output Produce output Wait for input Produce output

  33. Main Loop
      class Main {
        static public void loop(ServerSocket s) {
          Counter c = new Counter();
          while (true) {
            Socket p = s.accept();
            Worker t = new Worker(p,c);
            t.start();
          }
        }
      }
      Accept new connection; start new client thread

  34. Worker threads
      class Worker extends Thread {
        Socket s;
        Counter c;
        public void run() {
          out = s.getOutputStream();
          in = s.getInputStream();
          while (true) {
            inputLine = in.readLine();
            c.increment();
            if (inputLine == null) break;
            out.writeBytes(inputLine + "\n");
          }
        }
      }
      Wait for input; increment counter; produce output

  35. Synchronized Shared Counter
      class Counter {
        int contents = 0;
        synchronized void increment() {
          contents++;
        }
      }
      Acquire lock; increment counter; release lock

  36. Simple Activity Management Programs • Fixed, small number of threads • Based on functional decomposition: Device Management Thread, User Interface Thread, Compute Thread

  37. Key Properties of Activity Management Programs • Threads manage interactions • One thread per client or activity • Blocking I/O for interactions • Unstructured form of parallelism • Object is unit of sharing • Mutable shared objects (mutual exclusion) • Private objects (no synchronization) • Read shared objects (no synchronization) • Inherited objects passed from parent to child
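
      A small Java sketch of the four object categories listed above (all class and variable names are illustrative, not from the slides):

      class SharingCategories {
          // Mutable shared object: updated by several threads under mutual exclusion.
          static class SharedLog {
              private final StringBuilder text = new StringBuilder();
              synchronized void append(String line) { text.append(line).append('\n'); }
          }

          static void handleClient(SharedLog log, String greeting) {
              // Private object: created and used by this thread only, no synchronization.
              StringBuilder reply = new StringBuilder(greeting);
              reply.append(" from ").append(Thread.currentThread().getName());
              log.append(reply.toString());
          }

          public static void main(String[] args) throws InterruptedException {
              SharedLog log = new SharedLog();
              // Read-shared object: written before the threads start, only read afterwards.
              final String greeting = "hello";
              // log and greeting are inherited objects: passed from the parent thread
              // to the child threads it starts.
              Thread a = new Thread(() -> handleClient(log, greeting));
              Thread b = new Thread(() -> handleClient(log, greeting));
              a.start(); b.start();
              a.join(); b.join();
          }
      }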

  38. Why analyze multithreaded programs? • Discover or certify absence of errors (multithreading introduces new kinds of errors) • Discover or verify application-specific properties (interactions between threads complicate analysis) • Enable optimizations (new kinds of optimizations with multithreading; complications with traditional optimizations)

  39. Classic Errors in Multithreaded Programs Deadlocks Data Races

  40. Deadlock Deadlock if circular waiting for resources (typically mutual exclusion locks) Thread 1: lock(l); lock(m); x = x + y; unlock(m); unlock(l); Thread 2: lock(m); lock(l); y = y * x; unlock(l); unlock(m);

  41. Deadlock Deadlock if circular waiting for resources (typically mutual exclusion locks) Thread 1: lock(l); lock(m); x = x + y; unlock(m); unlock(l); Thread 2: lock(m); lock(l); y = y * x; unlock(l); unlock(m); Threads 1 and 2 Start Execution

  42. Deadlock Deadlock if circular waiting for resources (typically mutual exclusion locks) Thread 1: lock(l); lock(m); x = x + y; unlock(m); unlock(l); Thread 2: lock(m); lock(l); y = y * x; unlock(l); unlock(m); Thread 1 acquires lock l

  43. Deadlock Deadlock if circular waiting for resources (typically mutual exclusion locks) Thread 1: lock(l); lock(m); x = x + y; unlock(m); unlock(l); Thread 2: lock(m); lock(l); y = y * x; unlock(l); unlock(m); Thread 2 acquires lock m

  44. Deadlock Deadlock if circular waiting for resources (typically mutual exclusion locks) Thread 1: lock(l); lock(m); x = x + y; unlock(m); unlock(l); Thread 2: lock(m); lock(l); y = y * x; unlock(l); unlock(m); Thread 1 holds l and waits for m while Thread 2 holds m and waits for l
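
      The same circular-waiting scenario, written as a hedged Java sketch in which nested synchronized blocks play the role of lock(l)/lock(m) (class and field names are illustrative):

      class DeadlockDemo {
          static final Object l = new Object();
          static final Object m = new Object();
          static int x = 1, y = 2;

          public static void main(String[] args) {
              Thread t1 = new Thread(() -> {
                  synchronized (l) {            // Thread 1 acquires lock l
                      synchronized (m) {        // ... then waits for m
                          x = x + y;
                      }
                  }
              });
              Thread t2 = new Thread(() -> {
                  synchronized (m) {            // Thread 2 acquires lock m
                      synchronized (l) {        // ... then waits for l
                          y = y * x;
                      }
                  }
              });
              // If t1 gets l and t2 gets m before either takes its second lock,
              // each waits forever for the other: deadlock.
              t1.start();
              t2.start();
          }
      }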

  45. Data Races
      Data race if two parallel threads access the same memory location and at least one access is a write
      Data race:    A[i] = v || A[j] = w    (parallel accesses that may touch the same location)
      No data race: A[i] = v; A[j] = w;     (the same accesses, ordered sequentially)

  46. Synchronization and Data Races No data race if synchronization separates accesses Thread 1: lock(l); x = x + 1; unlock(l); Thread 2: lock(l); x = x + 2; unlock(l); Synchronization protocol: Associate lock with data Acquire lock to update data atomically

  47. Why are data races errors? • Correct programs exist that contain races • But most races are programming errors • Code intended to execute atomically • Synchronization omitted by mistake • Consequences can be severe • Nondeterministic, timing-dependent errors • Data structure corruption • Complicates analysis and optimization
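
      A hedged Java sketch of why an omitted synchronization is so dangerous: dropping synchronized from the Counter of slide 35 turns the increment into an unprotected read-modify-write, and the final count becomes timing dependent (class names here are illustrative):

      class RacyCounter {
          int contents = 0;
          void increment() { contents++; }   // read, add, write: not atomic without a lock
      }

      class RaceDemo {
          public static void main(String[] args) throws InterruptedException {
              RacyCounter c = new RacyCounter();
              Runnable work = () -> { for (int i = 0; i < 100000; i++) c.increment(); };
              Thread a = new Thread(work), b = new Thread(work);
              a.start(); b.start();
              a.join(); b.join();
              // Expected 200000, but updates can be lost when the threads interleave,
              // so the printed value typically varies from run to run.
              System.out.println(c.contents);
          }
      }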

  48. Overview of Analyses for Multithreaded Programs Key problem: interactions between threads • Flow-insensitive analyses • Escape analyses • Dataflow analyses • Explicit parallel flow graphs • Interference summary analysis • State space exploration

  49. Escape Analyses

  50. Program With Allocation Sites
      (slide shows a set of procedures with their bodies elided)
      void main(i,j) { … }
      void compute(d,e) { … }
      void evaluate(i,j) { … }
      void multiplyAdd(a,b,c) { … }
      void abs(r) { … }
      void scale(n,m) { … }
      void multiply(m) { … }
      void add(u,v) { … }
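
      Slide 49 names escape analyses, and the transcript of this section ends before defining them. As context for the allocation-site picture above, here is a small hedged Java sketch (all names illustrative) contrasting an allocation whose object stays local to its creating thread with one whose object escapes to shared state:

      class EscapeSketch {
          static java.util.List<int[]> shared = new java.util.ArrayList<>();

          static void compute() {
              // Allocation site 1: buffer is used only inside this call on this thread.
              // An escape analysis can conclude it never escapes, so no other thread
              // can ever race on it.
              int[] buffer = new int[16];
              for (int i = 0; i < buffer.length; i++) buffer[i] = i;

              // Allocation site 2: result is stored into a static list, so it escapes
              // the creating thread and later accesses need synchronization.
              int[] result = new int[] { buffer[0], buffer[15] };
              synchronized (shared) { shared.add(result); }
          }
      }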
