
Concurrent Programming



Presentation Transcript


  1. Concurrent Programming

  2. Outline
  • Topics:
    • Shared variables
    • Synchronizing with semaphores
  • Suggested reading: 12.4–12.5

  3. Approaches to Concurrency
  • Processes
    • Hard to share resources, but easy to avoid unintended sharing
    • High overhead in adding/removing clients
  • Threads
    • Easy to share resources: perhaps too easy
    • Medium overhead
    • Not much control over scheduling policies
    • Difficult to debug: event orderings are not repeatable
  • I/O Multiplexing
    • Tedious and low level
    • Total control over scheduling
    • Very low overhead
    • Cannot create as fine-grained a level of concurrency
    • Does not make use of multiple cores

  4. Traditional view of a process
  • Process = process context + code, data, and stack
  • [Figure: process address space (read-only code/data, read/write data, run-time heap, shared libraries, stack) alongside the program context (data registers, condition codes, stack pointer SP, program counter PC) and the kernel context (VM structures, descriptor table, brk pointer)]

  5. Alternate view of a process
  • Process = thread + code, data, and kernel context
  • [Figure: the main thread's context (data registers, condition codes, SP, PC) and stack, drawn separately from the shared code and data (read-only code/data, read/write data, run-time heap, shared libraries) and the kernel context (VM structures, descriptor table, brk pointer)]

  6. A process with multiple threads
  • Multiple threads can be associated with a process
  • Each thread has its own logical control flow (sequence of PC values)
  • Each thread shares the same code, data, and kernel context
  • Each thread has its own thread ID (TID)
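A minimal sketch of this view (using raw Pthreads rather than the csapp.h wrappers that appear later; the names and strings are illustrative, not from the slides): two peer threads share a global string, but each runs its own logical control flow and has its own thread ID. Printing pthread_self() as an unsigned long is a Linux-specific convenience, not portable C.

/* Sketch: peers share globals but have private stacks and TIDs. */
#include <pthread.h>
#include <stdio.h>

const char *greeting = "hello";   /* shared: one instance in the data segment */

void *peer(void *vargp)
{
    long id = (long)vargp;        /* private: lives on this thread's stack */
    printf("peer %ld (tid %lu) sees \"%s\"\n",
           id, (unsigned long)pthread_self(), greeting);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, peer, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}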

  7. A process with multiple threads
  • [Figure: shared code and data (read-only code/data, read/write data, run-time heap, shared libraries) and the kernel context (VM structures, descriptor table, brk pointer), plus per-thread contexts: thread 1 (main thread) with stack 1, data registers, condition codes, SP1, PC1, and thread 2 (peer thread) with stack 2, data registers, condition codes, SP2, PC2]

  8. Logical View of Threads
  • Threads associated with a process form a pool of peers
  • Unlike processes, which form a tree hierarchy
  • [Figure: threads T1–T5 of process foo drawn as peers around their shared code, data, and kernel context, contrasted with a process hierarchy rooted at P0 with children P1, sh, foo, and bar]

  9. Pros and cons of thread-based designs
  • + Easy to share data structures between threads
    • e.g., logging information, file cache
  • + Threads are more efficient than processes
  • – Unintentional sharing can introduce subtle and hard-to-reproduce errors!
    • The ease with which data can be shared is both the greatest strength and the greatest weakness of threads
    • Hard to know which data is shared and which is private

  10. Shared variables in threaded C programs
  • Which variables in a C program are shared and which are not?
  • What is the memory model for threads?
  • How are instances of the variable mapped to memory?
  • How many threads reference each of these instances?

  11. Threads memory model
  • Conceptual model:
    • Each thread runs in the context of a process
    • Each thread has its own separate thread context
      • Thread ID, stack, stack pointer, program counter, condition codes, and general-purpose registers
    • All threads share the remaining process context
      • Code, data, heap, and shared library segments of the process virtual address space
      • Open files and installed handlers

  12. Threads memory model
  • Operationally, this model is not strictly enforced:
    • Register values are truly separate and protected, but...
    • Any thread can read and write the stack of any other thread
  • The MISMATCH between the CONCEPTUAL and the OPERATIONAL model is a source of confusion and errors
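A small illustrative sketch of that loophole (the variable and function names are my own, not from the slides): a peer thread writes a variable that lives on the main thread's stack, simply because it was handed a pointer to it.

/* Sketch: conceptually x is "private" to main, operationally it is not. */
#include <pthread.h>
#include <stdio.h>

int *ptr_to_mains_stack;              /* global, visible to both threads */

void *peer(void *vargp)
{
    *ptr_to_mains_stack = 42;         /* writes a variable on main's stack */
    return NULL;
}

int main(void)
{
    int x = 0;                        /* lives on the main thread's stack */
    pthread_t tid;

    ptr_to_mains_stack = &x;
    pthread_create(&tid, NULL, peer, NULL);
    pthread_join(tid, NULL);
    printf("x = %d\n", x);            /* prints 42: the peer modified it */
    return 0;
}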

  13. Shared variable analysis

#include "csapp.h"
#define N 2

void *thread(void *vargp);

char **ptr;  /* global variable */

  14. Shared variable analysis

int main()
{
    int i;
    pthread_t tid;
    char *msgs[N] = {
        "Hello from foo",
        "Hello from bar"
    };

    ptr = msgs;
    for (i = 0; i < N; i++)
        Pthread_create(&tid, NULL, thread, (void *)i);
    Pthread_exit(NULL);
}

  15. Shared variable analysis

void *thread(void *vargp)
{
    int myid = (int)vargp;
    static int cnt = 0;

    printf("[%d]: %s (cnt=%d)\n", myid, ptr[myid], ++cnt);
    return NULL;
}

  16. Mapping Variable Instances to Memory
  • Global variables
    • Def: variable declared outside of a function
    • Virtual memory contains exactly one instance of any global variable
  • Local variables
    • Def: variable declared inside a function without the static attribute
    • Each thread stack contains one instance of each local variable
  • Local static variables
    • Def: variable declared inside a function with the static attribute
    • Virtual memory contains exactly one instance of any local static variable
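A minimal sketch with one variable of each kind (separate from the slides' running example; the names are illustrative): gcnt and scnt each have exactly one instance in virtual memory, while each thread gets its own instance of lcnt on its stack.

#include <pthread.h>
#include <stdio.h>

int gcnt = 0;                  /* global: exactly one instance */

void *worker(void *vargp)
{
    int lcnt = 0;              /* local: one instance per thread stack */
    static int scnt = 0;       /* local static: one instance, shared */

    gcnt++;                    /* shared: races if not synchronized */
    scnt++;                    /* shared: races if not synchronized */
    lcnt++;                    /* private: no race possible */
    printf("lcnt=%d\n", lcnt); /* always prints 1 */
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    printf("gcnt=%d scnt=%d\n", gcnt, scnt);
    return 0;
}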

  17. Shared variable analysis
  • Which variables are shared?

    Variable     Referenced by    Referenced by     Referenced by
    instance     main thread?     peer thread 0?    peer thread 1?
    ptr          yes              yes               yes
    cnt          no               yes               yes
    i.m          yes              no                no
    msgs.m       yes              yes               yes
    myid.p0      no               yes               no
    myid.p1      no               no                yes

  18. Shared variable analysis
  • Answer: A variable x is shared iff multiple threads reference at least one instance of x
  • Thus:
    • ptr, cnt, and msgs are shared
    • i and myid are NOT shared

  19. Shared variable analysis

#include "csapp.h"

#define NITERS 100000000
void *count(void *arg);

/* shared variable */
unsigned int cnt = 0;

int main()
{
    pthread_t tid1, tid2;

    Pthread_create(&tid1, NULL, count, NULL);
    Pthread_create(&tid2, NULL, count, NULL);
    Pthread_join(tid1, NULL);
    Pthread_join(tid2, NULL);

    if (cnt != (unsigned)NITERS*2)
        printf("BOOM! cnt=%d\n", cnt);
    else
        printf("OK cnt=%d\n", cnt);
    exit(0);
}

  20. Shared variable analysis

/* thread routine */
void *count(void *arg)
{
    int i;
    for (i = 0; i < NITERS; i++)
        cnt++;
    return NULL;
}

  21. Shared variable analysis
  • cnt should be equal to 200,000,000.
  • What went wrong?!

    linux> badcnt
    BOOM! cnt=198841183
    linux> badcnt
    BOOM! cnt=198261801
    linux> badcnt
    BOOM! cnt=198269672

  22. Assembly code for counter loop
  • C code for thread i:

    for (i = 0; i < NITERS; i++)
        cnt++;

  23. Assembly code for counter loop
  • Asm code for thread i:

.L9:                            # Head (Hi)
    movl -4(%ebp),%eax          # i: -4(%ebp)
    cmpl $99999999,%eax
    jle .L12
    jmp .L10
.L12:
    movl cnt,%eax               # Load cnt   (Li)
    leal 1(%eax),%edx           # Update cnt (Ui)
    movl %edx,cnt               # Store cnt  (Si)
.L11:                           # Tail (Ti)
    movl -4(%ebp),%eax
    leal 1(%eax),%edx
    movl %edx,-4(%ebp)
    jmp .L9
.L10:

  24. Concurrent execution
  • Key idea: In general, any sequentially consistent interleaving is possible, but some are incorrect!
    • Ii denotes that thread i executes instruction I
    • %eaxi is the contents of %eax in thread i's context

  25. Concurrent execution
  • A correct ordering (result: OK, cnt = 2):

    i (thread)   instr_i   %eax1   %eax2   cnt
    1            H1        -       -       0
    1            L1        0       -       0
    1            U1        1       -       0
    1            S1        1       -       1
    2            H2        -       -       1
    2            L2        -       1       1
    2            U2        -       2       1
    2            S2        -       2       2
    2            T2        -       2       2
    1            T1        1       -       2

  26. Concurrent execution (cont)
  • Incorrect ordering: two threads increment the counter, but the result is 1 instead of 2

    i (thread)   instr_i   %eax1   %eax2   cnt
    1            H1        -       -       0
    1            L1        0       -       0
    1            U1        1       -       0
    2            H2        -       -       0
    2            L2        -       0       0
    1            S1        1       -       1
    1            T1        1       -       1
    2            U2        -       1       1
    2            S2        -       1       1
    2            T2        -       1       1

  27. Progress graphs
  • A progress graph depicts the discrete execution state space of concurrent threads.
  • Each axis corresponds to the sequential order of instructions in a thread.
  • Each point corresponds to a possible execution state (Inst1, Inst2).
    • E.g., (L1, S2) denotes the state where thread 1 has completed L1 and thread 2 has completed S2.
  • [Figure: progress graph with thread 1's instructions (H1, L1, U1, S1, T1) on the horizontal axis, thread 2's (H2, L2, U2, S2, T2) on the vertical axis, and the point (L1, S2) marked]

  28. Trajectories in progress graphs
  • A trajectory is a sequence of legal state transitions that describes one possible concurrent execution of the threads.
  • Example: H1, L1, U1, H2, L2, S1, T1, U2, S2, T2
  • [Figure: that trajectory drawn as a path through the progress graph]

  29. Critical sections and unsafe regions
  • L, U, and S form a critical section with respect to the shared variable cnt.
  • Instructions in critical sections (which write to some shared variable) should not be interleaved.
  • Sets of states where such interleaving occurs form unsafe regions.
  • [Figure: progress graph with the critical section wrt cnt marked on each axis (L, U, S) and the unsafe region they delimit]

  30. Safe and unsafe trajectories
  • Def: A trajectory is safe iff it doesn't touch any part of an unsafe region.
  • Claim: A trajectory is correct (wrt cnt) iff it is safe.
  • [Figure: progress graph showing a safe trajectory that skirts the unsafe region and an unsafe trajectory that passes through it]

  31. Synchronizing with semaphores
  • Dijkstra's P and V operations on semaphores
    • Semaphore: non-negative integer synchronization variable
    • P(s): [ while (s == 0) wait(); s--; ]
      • Dutch for "Proberen" (test)
    • V(s): [ s++; ]
      • Dutch for "Verhogen" (increment)

  32. Synchronizing with semaphores
  • Dijkstra's P and V operations on semaphores
    • OS guarantees that the operations between the brackets [ ] are executed indivisibly
      • Only one P or V operation at a time can modify s
      • When the while loop in P terminates, only that P can decrement s
    • Semaphore invariant: (s >= 0)
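For intuition only, here is one way the bracketed semantics could be emulated in user space with a pthread mutex and condition variable. This is my sketch, not how the OS or the slides implement semaphores; in practice you would use the POSIX semaphore API shown on the next slide.

/* Conceptual sketch of P and V built from a mutex + condition variable.
 * Initialize s with the starting value, lock with PTHREAD_MUTEX_INITIALIZER,
 * and nonzero with PTHREAD_COND_INITIALIZER. */
#include <pthread.h>

typedef struct {
    int s;                       /* non-negative semaphore value */
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
} sem_sketch_t;

void P_sketch(sem_sketch_t *sem)
{
    pthread_mutex_lock(&sem->lock);
    while (sem->s == 0)                      /* wait until s > 0 */
        pthread_cond_wait(&sem->nonzero, &sem->lock);
    sem->s--;                                /* decrement indivisibly */
    pthread_mutex_unlock(&sem->lock);
}

void V_sketch(sem_sketch_t *sem)
{
    pthread_mutex_lock(&sem->lock);
    sem->s++;                                /* increment indivisibly */
    pthread_cond_signal(&sem->nonzero);      /* wake one waiting P */
    pthread_mutex_unlock(&sem->lock);
}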

  33. POSIX semaphores

#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
int sem_wait(sem_t *s);  /* P(s) */
int sem_post(sem_t *s);  /* V(s) */

#include "csapp.h"
void P(sem_t *s);  /* Wrapper function for sem_wait */
void V(sem_t *s);  /* Wrapper function for sem_post */
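The csapp.h P and V functions are error-checking wrappers around sem_wait and sem_post. A rough sketch of what such wrappers look like (my approximation, not the book's exact code): call the POSIX function and terminate on error, so callers need no error-handling code.

#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

void P(sem_t *s)
{
    if (sem_wait(s) < 0) {          /* P(s): wait until s > 0, then decrement */
        perror("P error");
        exit(1);
    }
}

void V(sem_t *s)
{
    if (sem_post(s) < 0) {          /* V(s): increment s, wake a waiter */
        perror("V error");
        exit(1);
    }
}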

  34. Sharing with POSIX semaphores

#include "csapp.h"
#define NITERS 10000000

unsigned int cnt;  /* counter */
sem_t sem;         /* semaphore */

int main()
{
    pthread_t tid1, tid2;

    Sem_init(&sem, 0, 1);

    /* create 2 threads and wait */
    ...

    if (cnt != (unsigned)NITERS*2)
        printf("BOOM! cnt=%d\n", cnt);
    else
        printf("OK cnt=%d\n", cnt);
    exit(0);
}

  35. Sharing with POSIX semaphores

/* thread routine */
void *count(void *arg)
{
    int i;
    for (i = 0; i < NITERS; i++) {
        P(&sem);
        cnt++;
        V(&sem);
    }
    return NULL;
}

  36. Safe sharing with semaphores
  • Provide mutually exclusive access to the shared variable by surrounding the critical section with P and V operations on semaphore s (initially set to 1).
  • The semaphore invariant creates a forbidden region that encloses the unsafe region and is never touched by any trajectory.
  • [Figure: progress graph for the P(s) ... cnt++ ... V(s) version with s = 1 initially; each state is annotated with the value of s, the states where s would be -1 form the forbidden region, and that region encloses the unsafe region]
