
Introduction to Concurrency


Presentation Transcript


  1. Introduction to Concurrency
  Concurrency: execute two or more pieces of code “at the same time.”
  Why?
  No choice:
  - Geographically distributed data
  - Interoperability of different machines
  - A piece of code must “serve” many other client processes
  - To achieve reliability
  By choice:
  - To achieve speedup
  - Sometimes makes programming easier (e.g., UNIX pipes)

  2. Possibilities for Concurrency
  Architecture -> programming style:
  - Uniprocessor with an I/O channel, I/O processor, or DMA -> multiprogramming, multiple-process systems
  - Network of uniprocessors -> distributed programming
  - Multiple CPUs -> parallel programming

  3. Definitions
  Concurrent process execution can be:
  - interleaved, or
  - physically simultaneous
  Interleaved: multiprogramming on a uniprocessor
  Physically simultaneous: uni- or multiprogramming on a multiprocessor
  Process, thread, or task: a schedulable unit of computation
  Granularity: process “size,” or the computation-to-communication ratio
  - Too small: excessive overhead
  - Too large: less concurrency

  4. Precedence Graph
  Consider writing a program as a set of tasks. A precedence graph specifies the execution ordering among tasks:
  S0: A := X - Y
  S1: B := X + Y
  S2: C := Z + 1
  S3: C := A - B
  S4: W := C + 1
  (Graph on the slide: S0, S1, and S2 can start immediately; S3 follows them; S4 follows S3.)
  Parallelizing compilers for computers with vector processors build such dependency graphs.

  5. Cyclic Precedence Graphs
  What does the following graph represent?
  (Slide shows a graph over S1, S2, S3 containing a cycle.)

  6. Examples of Concurrency in Uniprocessors
  Example 1: UNIX pipes
  Motivation: fast to write the code, fast to execute
  Example 2: Buffering
  Motivation: required when two asynchronous processes must communicate
  Example 3: Client/server model
  Motivation: geographically distributed computing

  7. Operating System Issues
  Synchronization: what primitives should the OS provide?
  Communication: what primitives should the OS provide to interface with communication protocols?
  Hardware support: needed to implement OS primitives
  Remote execution: what primitives should the OS provide?
  - Remote procedure call (RPC)
  - Remote command shell
  Sharing address spaces: makes programming easier
  Lightweight threads: can process creation be as cheap as a procedure call?

  8. Parallel Language Constructs: FORK and JOIN
  FORK L: starts parallel execution at the statement labeled L and at the statement following the FORK.
  JOIN Count: recombines ‘Count’ concurrent computations:
  Count := Count - 1;
  if (Count > 0) then quit;
  JOIN is an atomic operation.

  9. Definition: Atomic Operation
  If I am a process executing on a processor, and I execute an atomic operation, then all other processes executing on this or any other processor:
  - can see the state of the system before I execute or after I execute,
  - but cannot see any intermediate state while I am executing.
  Example: bank teller. /* Joe has $1000, split equally between savings and checking accounts */
  - Subtract $100 from Joe’s savings account
  - Add $100 to Joe’s checking account
  Other processes should never read Joe’s balances in between and find that his accounts total only $900.

  10. Concurrency Conditions
  Let Si denote a statement.
  Read set of Si: R(Si) = {a1, a2, ..., an}, the set of all variables referenced in Si.
  Write set of Si: W(Si) = {b1, b2, ..., bm}, the set of all variables changed by Si.
  Examples:
  C := A - B
  R(C := A - B) = {A, B}
  W(C := A - B) = {C}
  scanf("%d", &A)
  R(scanf("%d", &A)) = {}
  W(scanf("%d", &A)) = {A}

  11. Bernstein’s Conditions
  The following conditions must hold for two statements S1 and S2 to execute concurrently with valid results:
  - R(S1) ∩ W(S2) = {}
  - W(S1) ∩ R(S2) = {}
  - W(S1) ∩ W(S2) = {}
  These are called the Bernstein conditions.

  12. Fork and Join Example #1
  S1: A := X + Y
  S2: B := Z + 1
  S3: C := A - B
  S4: W := C + 1
  (Graph: S1 and S2 run concurrently; both precede S3, which precedes S4.)
  Count := 2;
  FORK L1;
  A := X + Y;       /* S1 */
  Goto L2;
  L1: B := Z + 1;   /* S2 */
  L2: JOIN Count;
  C := A - B;       /* S3 */
  W := C + 1;       /* S4 */

  13. Structured Parallel Constructs: PARBEGIN / PAREND
  PARBEGIN: sequential execution splits off into several concurrent sequences.
  PAREND: the parallel computations merge.
  PARBEGIN
    Statement 1;
    Statement 2;
    ...
    Statement N;
  PAREND;
  Example:
  PARBEGIN
    Q := C mod 25;
    Begin N := N - 1; T := N / 5; End;
    Proc1(X, Y);
  PAREND;

  14. Fork and Join Example #2
  S1;
  Count := 3;
  FORK L1;
  S2;
  S4;
  FORK L2;
  S5;
  Goto L3;
  L2: S6;
  Goto L3;
  L1: S3;
  L3: JOIN Count;
  S7;
  (Graph: S1 precedes S2 and S3; S2 precedes S4; S4 precedes S5 and S6; S3, S5, and S6 all precede S7.)
  Up to three tasks (S3, S5, S6) may execute concurrently.

  15. Parbegin / Parend Examples
  Begin
    PARBEGIN
      A := X + Y;
      B := Z + 1;
    PAREND;
    C := A - B;
    W := C + 1;
  End
  Begin
    S1;
    PARBEGIN
      S3;
      Begin
        S2;
        S4;
        PARBEGIN
          S5;
          S6;
        PAREND;
      End;
    PAREND;
    S7;
  End;

  16. Comparison
  Unfortunately, the structured concurrent statement is not powerful enough to model all precedence graphs.
  (Slide shows a modified precedence graph over seven statements that PARBEGIN/PAREND cannot express.)

  17. Comparison (cont’d)
  Fork and Join code for the modified precedence graph:
  S1;
  Count1 := 2;
  FORK L1;
  S2;
  S4;
  Count2 := 2;
  FORK L2;
  S5;
  Goto L3;
  L1: S3;
  L2: JOIN Count1;
  S6;
  L3: JOIN Count2;
  S7;

  18. Comparison (cont’d)
  There is no corresponding structured-construct code for the same graph.
  However, other synchronization techniques can supplement PARBEGIN/PAREND.
  Also, not all graphs need to be implemented for real-world problems.

  19. Overview
  System calls:
  - fork()
  - wait()
  - pipe()
  - write()
  - read()
  Examples follow.

  20. Process Creation: fork()
  NAME
  fork() - create a new process
  SYNOPSIS
  #include <sys/types.h>
  #include <unistd.h>
  pid_t fork(void);
  RETURN VALUE
  On success: the child’s PID in the parent, 0 in the child.
  On failure: -1.

  21. fork() system call - example
  #include <sys/types.h>
  #include <unistd.h>
  #include <stdio.h>
  int main(void)
  {
      printf("[%ld] parent process id: %ld\n",
             (long)getpid(), (long)getppid());
      fork();
      printf("\n[%ld] parent process id: %ld\n",
             (long)getpid(), (long)getppid());
      return 0;
  }

  22. fork() system call - example output
  [17619] parent process id: 12729
  [17619] parent process id: 12729
  [2372] parent process id: 17619

  23. fork() - program structure
  #include <sys/types.h>
  #include <unistd.h>
  #include <stdio.h>
  #include <stdlib.h>
  int main(void)
  {
      pid_t pid;
      if ((pid = fork()) > 0) {
          /* parent */
      } else if (pid == 0) {
          /* child */
      } else {
          /* cannot fork */
      }
      exit(0);
  }

  24. wait() system call
  wait() - wait for a child process to terminate.
  SYNOPSIS
  #include <sys/types.h>
  #include <sys/wait.h>
  pid_t wait(int *stat_loc);
  PARAMETER
  stat_loc points to an integer in which the child’s termination status is stored (pass NULL to ignore it).
  RETURN VALUE
  On success: the PID of the terminated child.
  On failure: -1, and errno is set.

  25. wait() - program structure
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <stdlib.h>
  #include <stdio.h>
  int main(int argc, char *argv[])
  {
      pid_t childPID;
      if ((childPID = fork()) == 0) {
          /* child */
      } else {
          /* parent */
          wait(0);
      }
      exit(0);
  }

  26. pipe() system call
  pipe() - create a read-write pipe that may later be used to communicate with a process we’ll fork off.
  SYNOPSIS
  #include <unistd.h>
  int pipe(int pfd[2]);
  PARAMETER
  pfd is an array of two integers that will hold the two file descriptors used to access the pipe.
  RETURN VALUE
  0 on success; -1 on error.

  27. pipe() - structure
  /* first, define an array to store the two file descriptors */
  int pipes[2];
  /* now, create the pipe */
  int rc = pipe(pipes);
  if (rc == -1) {
      /* pipe() failed */
      perror("pipe");
      exit(1);
  }
  If the call to pipe() succeeded, a pipe is created; pipes[0] will contain its read file descriptor and pipes[1] its write file descriptor.

  28. write() system call
  write() - write data to a file or other object identified by a file descriptor.
  SYNOPSIS
  #include <unistd.h>
  ssize_t write(int fildes, const void *buf, size_t nbyte);
  PARAMETERS
  fildes is the file descriptor; buf is the base address of the memory area the data is copied from; nbyte is the amount of data to copy.
  RETURN VALUE
  The actual amount of data written; if this differs from nbyte, something has gone wrong.

  29. read() system call
  read() - read data from a file or other object identified by a file descriptor.
  SYNOPSIS
  #include <unistd.h>
  ssize_t read(int fildes, void *buf, size_t nbyte);
  PARAMETERS
  fildes is the file descriptor; buf is the base address of the memory area into which the data is read; nbyte is the maximum amount of data to read.
  RETURN VALUE
  The actual amount of data read from the file; the file offset is advanced by the amount read.
