
Parallel Programming



  1. Parallel Programming

  2. Types of Parallel Computers Two principal types: 1. Single computer containing multiple processors - main memory is shared, hence called “Shared memory multiprocessor” 2. Interconnected multiple computer systems

  3. Conventional Computer Consists of a processor executing a program stored in a (main) memory. Each main memory location is identified by its address. Addresses start at 0 and extend to 2^b - 1 when there are b bits (binary digits) in the address. [Figure: processor connected to main memory; instructions flow to the processor, data flows to and from the processor.]

  4. Shared Memory Multiprocessor • Extend the single-processor model: multiple processors connected to a single shared memory with a single address space. [Figure: processors connected to one shared memory.] A real system will have cache memory associated with each processor.

  5. Examples • Dual Pentiums • Quad Pentiums

  6. Quad Pentium Shared Memory Multiprocessor [Figure: four processors, each with an L1 cache, L2 cache, and bus interface, connected by a processor/memory bus to a memory controller (shared memory) and an I/O interface (I/O bus).]

  7. Programming Shared Memory Multiprocessors • Threads - programmer decomposes program into parallel sequences (threads), each being able to access variables declared outside threads. Example: Pthreads • Use sequential programming language with preprocessor compiler directives, constructs, or syntax to declare shared variables and specify parallelism. Examples: OpenMP (an industry standard), UPC (Unified Parallel C) -- needs compilers.
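  As an illustration of the directive-based approach mentioned above (not from the slides), here is a minimal OpenMP sketch in C. It assumes an OpenMP-capable compiler (e.g. compiled with -fopenmp); the summation loop is just a placeholder workload.

  #include <stdio.h>
  #include <omp.h>

  int main(void)
  {
      long sum = 0;
      /* The pragma asks the compiler to split the loop iterations among
         threads; the reduction clause combines the per-thread partial sums. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 1; i <= 1000; i++)
          sum += i;
      printf("sum = %ld (computed with up to %d threads)\n",
             sum, omp_get_max_threads());
      return 0;
  }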

  8. • Parallel programming language with syntax to express parallelism; compiler creates parallel executable code - not now common. • Use a parallelizing compiler to convert regular sequential-language programs into parallel executable code - also not now common.

  9. Message-Passing Multicomputer Complete computers connected through an interconnection network. [Figure: computers, each with a processor and local memory, exchanging messages over an interconnection network.]

  10. Dedicated cluster with a master node [Figure: the master node connects to users on the external network through a second Ethernet interface and to the compute nodes through a switch on the cluster's internal Ethernet.]

  11. Programming Clusters • Usually based upon explicit message-passing. • Common approach -- a set of user-level libraries for message passing. Example: • Parallel Virtual Machine (PVM) - late 1980’s. Became very popular in mid 1990’s. • Message-Passing Interface (MPI) - standard defined in 1990’s and now dominant.

  12. MPI(Message Passing Interface) • Message passing library standard developed by group of academics and industrial partners to foster more widespread use and portability. • Defines routines, not implementation. • Several free implementations exist.

  13. MPI designed: • To address some problems with earlier message-passing systems such as PVM. • To provide a powerful message-passing mechanism and routines - over 126 routines (although it is said that one can write reasonable MPI programs with just 6 MPI routines).
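  A minimal sketch (not from the slides) using exactly six routines - MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize; the message contents are arbitrary.

  #include <stdio.h>
  #include "mpi.h"

  int main(int argc, char *argv[])
  {
      int rank, size, token = 42;
      MPI_Status status;
      MPI_Init(&argc, &argv);                    /* 1: start MPI */
      MPI_Comm_size(MPI_COMM_WORLD, &size);      /* 2: number of processes */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* 3: my rank */
      if (rank == 0 && size > 1)
          MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);             /* 4 */
      else if (rank == 1) {
          MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);    /* 5 */
          printf("Process 1 received %d\n", token);
      }
      MPI_Finalize();                            /* 6: shut down MPI */
      return 0;
  }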

  14. Message-Passing Programming using User-level Message Passing Libraries Two primary mechanisms needed: 1. A method of creating separate processes for execution on different computers 2. A method of sending and receiving messages

  15. Multiple Program, Multiple Data (MPMD) model [Figure: separate source files are compiled into separate executables, one loaded onto processor 0 and another onto processor p - 1.]

  16. Single Program, Multiple Data (SPMD) model Different processes merged into one program. Control statements select different parts for each processor to execute. The basic MPI way. [Figure: one source file compiled into executables loaded onto processor 0 through processor p - 1.]

  17. Multiple Program Multiple Data (MPMD) Model Separate programs for each processor. One processor may execute as a master process; other processes are started from within the master process - dynamic process creation. Can be done with MPI version 2. [Figure: process 1 calls spawn() to start execution of process 2 at a later point in time.]

  18. Communicators • Defines scope of a communication operation. • Processes have ranks associated with communicator. • Initially, all processes enrolled in a “universe” called MPI_COMM_WORLD, and each process is given a unique rank, a number from 0 to p - 1, with p processes. • Other communicators can be established for groups of processes.
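  A short sketch of working with communicators (not from the slides). The split into even- and odd-ranked groups is just an illustrative choice; MPI_Comm_split() and MPI_Comm_free() are standard MPI routines.

  #include <stdio.h>
  #include "mpi.h"

  int main(int argc, char *argv[])
  {
      int world_rank, sub_rank;
      MPI_Comm subcomm;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
      /* Create a new communicator per "color": even-ranked processes
         form one group, odd-ranked processes another. */
      MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
      MPI_Comm_rank(subcomm, &sub_rank);   /* rank within the new group */
      printf("World rank %d has rank %d in its subgroup\n", world_rank, sub_rank);
      MPI_Comm_free(&subcomm);
      MPI_Finalize();
      return 0;
  }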

  19. Using SPMD Computational Model

  main(int argc, char *argv[])
  {
      MPI_Init(&argc, &argv);
      .
      .
      MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
      if (myrank == 0)
          master();
      else
          slave();
      .
      .
      MPI_Finalize();
  }

  where master() and slave() are to be executed by the master process and slave process, respectively.

  20. Basic "point-to-point" Send and Receive Routines Passing a message between processes using send() and recv() library calls (generic syntax; actual formats later). [Figure: process 1 calls send(&x, 2) and process 2 calls recv(&y, 1); the data in x moves to y.]

  21. Message Tag • Used to differentiate between different types of messages being sent. • Message tag is carried within message. • If special type matching is not required, a wild card message tag is used, so that the recv() will match with any send().
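  In MPI the wild cards are MPI_ANY_TAG for the tag and MPI_ANY_SOURCE for the sender's rank. A receive fragment (assumes MPI has been initialized and some process is sending a single int); the actual source and tag are recovered from the status structure.

  MPI_Status status;
  int y;
  /* Accept a message from any source, with any tag. */
  MPI_Recv(&y, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
           MPI_COMM_WORLD, &status);
  printf("Got a message from rank %d with tag %d\n",
         status.MPI_SOURCE, status.MPI_TAG);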

  22. Message Tag Example To send a message, x, with message tag 5 from a source process, 1, to a destination process, 2, and assign it to y: [Figure: process 1 calls send(&x, 2, 5) and process 2 calls recv(&y, 1, 5); recv() waits for a message from process 1 with a tag of 5, then the data moves from x to y.]

  23. Synchronous Message Passing Routines return when message transfer completed. Synchronous send routine • Waits until complete message can be accepted by the receiving process before sending the message. Synchronous receive routine • Waits until the message it is expecting arrives.
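  MPI's synchronous send is MPI_Ssend(), which has the same parameters as MPI_Send() but does not complete until the matching receive has started. A fragment (assumes MPI is initialized, myrank has been obtained as above, and msgtag is an agreed tag value):

  int x = 123;
  MPI_Status status;
  if (myrank == 0) {
      /* Returns only once process 1 has started receiving the message. */
      MPI_Ssend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
  } else if (myrank == 1) {
      MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
  }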

  24. Synchronous send() and recv() using a three-way protocol (a) When send() occurs before recv(): [Figure: process 1 calls send(), issues a request to send, and suspends; when process 2 calls recv(), an acknowledgment is returned, the message is transferred, and both processes continue.]

  25. Synchronous send() and recv() using a three-way protocol (b) When recv() occurs before send(): [Figure: process 2 calls recv() and suspends; when process 1 calls send(), the request to send and acknowledgment are exchanged, the message is transferred, and both processes continue.]

  26. Synchronous routines intrinsically perform two actions: • They transfer data and • They synchronize processes.

  27. Asynchronous Message Passing • Routines that do not wait for actions to complete before returning. Usually require local storage for messages. • In general, they do not synchronize processes but allow processes to move forward sooner. Must be used with care.

  28. MPI Blocking and Non-Blocking • Blocking - return after their local actions complete, though the message transfer may not have been completed. For example, a blocking send may return once the message has been written to an OS buffer. • Non-blocking - return immediately. Assumes that the data storage used for the transfer is not modified by subsequent statements before the transfer completes; it is left to the programmer to ensure this.

  29. How message-passing routines can return before the message transfer has completed A message buffer is needed between source and destination to hold the message: [Figure: process 1 calls send(), the message is copied into a message buffer, and the process continues; later, process 2 calls recv() and reads the message from the buffer.]

  30. Asynchronous routines changing into synchronous routines • Buffers are only of finite length, and a point can be reached where a send routine is held up because all available buffer space is exhausted. • The send routine then waits until storage becomes available again - i.e., the routine behaves as a synchronous routine.
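  MPI makes this buffering explicit in buffered mode: the programmer supplies the buffer space with MPI_Buffer_attach() and sends with MPI_Bsend(). A fragment (not from the slides; assumes MPI is initialized, <stdlib.h> is included for malloc/free, and dest and msgtag are chosen by the caller):

  int x = 99;
  int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
  char *bsendbuf = (char *)malloc(bufsize);
  MPI_Buffer_attach(bsendbuf, bufsize);   /* supply buffer space for buffered sends */
  MPI_Bsend(&x, 1, MPI_INT, dest, msgtag, MPI_COMM_WORLD);
  /* ... later, reclaim the buffer (blocks until buffered messages are sent) */
  MPI_Buffer_detach(&bsendbuf, &bufsize);
  free(bsendbuf);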

  31. Parameters of MPI blocking send MPI_Send(buf, count, datatype, dest, tag, comm) buf - address of send buffer; count - number of items to send; datatype - datatype of each item; dest - rank of destination process; tag - message tag; comm - communicator.

  32. Parameters of MPI blocking receive MPI_Recv(buf, count, datatype, source, tag, comm, status) buf - address of receive buffer; count - maximum number of items to receive; datatype - datatype of each item; source - rank of source process; tag - message tag; comm - communicator; status - status after operation.

  33. Example To send an integer x from process 0 to process 1 (msgtag is an agreed tag value and status is of type MPI_Status):

  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
  if (myrank == 0) {
      int x;
      MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
  } else if (myrank == 1) {
      int x;
      MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
  }

  34. MPI Nonblocking Routines • Nonblocking send - MPI_Isend() - will return “immediately” even before source location is safe to be altered. • Nonblocking receive - MPI_Irecv() - will return even if no message to accept.

  35. Detecting completion when a message is sent with a non-blocking send routine Completion is detected by MPI_Wait() and MPI_Test(). MPI_Wait() waits until the operation has completed and then returns. MPI_Test() returns immediately, with a flag set indicating whether the operation had completed at that time. Need to know which particular send you are waiting for - identified by the request parameter.
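  A sketch of polling with MPI_Test() so that useful work overlaps the transfer (fragment, on the sending side; assumes a matching receive is posted on process 1, and do_some_work() is a hypothetical placeholder for the overlapped computation):

  MPI_Request req;
  MPI_Status status;
  int flag = 0, x = 7;
  MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &req);
  while (!flag) {
      do_some_work();                   /* hypothetical: overlapped computation */
      MPI_Test(&req, &flag, &status);   /* has the send completed yet? */
  }
  /* x may now be safely modified */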

  36. Example To send an integer x from process 0 to process 1 and allow process 0 to continue (req1 is of type MPI_Request, status of type MPI_Status):

  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);   /* find rank */
  if (myrank == 0) {
      int x;
      MPI_Isend(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD, &req1);
      compute();                    /* overlap computation with the transfer */
      MPI_Wait(&req1, &status);     /* ensure the send has completed */
  } else if (myrank == 1) {
      int x;
      MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, &status);
  }

  37. Collective message-passing routines Routines that send message(s) to a group of processes or receive message(s) from a group of processes. Higher efficiency than separate point-to-point routines, although not absolutely necessary.

  38. Broadcast Sending the same message to a group of processes. (Sometimes "multicast" - sending the same message to a defined group of processes; "broadcast" - to all processes.) [Figure: every process, from process 0 to process p - 1, calls MPI_Bcast(); the data in the root's buf is copied into the data buffer of every process.]

  39. MPI Broadcast routine int MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm) • Action: broadcasts a message from the root process to all processes in comm, including itself. • Parameters: *buf - message buffer; count - number of entries in buffer; datatype - data type of buffer; root - rank of root.
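  A short usage sketch (fragment, inside an initialized MPI program): every process calls MPI_Bcast() with the same arguments; on the root the buffer holds the data to send, on the others it is filled in.

  int table[4];
  int root = 0, myrank;
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  if (myrank == root) {
      /* Only the root fills in the data to be broadcast. */
      for (int i = 0; i < 4; i++) table[i] = i * i;
  }
  MPI_Bcast(table, 4, MPI_INT, root, MPI_COMM_WORLD);
  /* Every process now has the same contents in table[]. */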

  40. Scatter Sending each element of an array in the root process to a separate process; the contents of the ith location of the array are sent to the ith process. [Figure: every process, from process 0 to process p - 1, calls MPI_Scatter(); successive parts of the root's buf arrive in the data buffers of processes 0 to p - 1.]
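  A sketch of scattering one integer to each process (fragment, inside an initialized MPI program; assumes at most 8 processes, matching the size of sendbuf):

  int sendbuf[8];          /* significant only at the root */
  int myvalue, p, myrank;
  MPI_Comm_size(MPI_COMM_WORLD, &p);
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  if (myrank == 0)
      for (int i = 0; i < p; i++) sendbuf[i] = 100 + i;
  /* Each process receives one element: process i gets sendbuf[i]. */
  MPI_Scatter(sendbuf, 1, MPI_INT, &myvalue, 1, MPI_INT, 0, MPI_COMM_WORLD);
  printf("Process %d received %d\n", myrank, myvalue);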

  41. Gather Having one process collect individual values from a set of processes. [Figure: every process, from process 0 to process p - 1, calls MPI_Gather(); the data from each process is placed in successive parts of the root's buf.]

  42. Reduce A gather operation combined with a specified arithmetic/logical operation. Example: values could be gathered and then added together by the root: [Figure: every process, from process 0 to process p - 1, calls MPI_Reduce(); the data values are combined (here with +) into the root's buf.]

  43. Collective Communication Involves set of processes, defined by an intra-communicator. Message tags not present. Principal collective operations: • MPI_Bcast() - Broadcast from root to all other processes • MPI_Gather() - Gather values for group of processes • MPI_Scatter() - Scatters buffer in parts to group of processes • MPI_Alltoall() - Sends data from all processes to all processes • MPI_Reduce() - Combine values on all processes to single value • MPI_Reduce_scatter() - Combine values and scatter results • MPI_Scan() - Compute prefix reductions of data on processes
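  Of the routines in this list, MPI_Scan() is the only one not illustrated elsewhere in the deck; a sketch of an inclusive prefix sum over the process ranks (fragment, inside an initialized MPI program):

  int myrank, prefix;
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  /* prefix on process i becomes the sum of the values on processes 0..i */
  MPI_Scan(&myrank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
  printf("Process %d: prefix sum = %d\n", myrank, prefix);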

  44. Example To gather items from a group of processes into process 0, using dynamically allocated memory in the root process:

  int data[10];                               /* data to be gathered from each process */
  int *buf = NULL;
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);     /* find rank */
  if (myrank == 0) {
      MPI_Comm_size(MPI_COMM_WORLD, &grp_size);           /* find group size */
      buf = (int *)malloc(grp_size * 10 * sizeof(int));   /* allocate memory */
  }
  MPI_Gather(data, 10, MPI_INT, buf, 10, MPI_INT, 0, MPI_COMM_WORLD);

  Note that the receive count is the number of items expected from each process (10), and that MPI_Gather() gathers from all processes, including the root.

  45. Sample MPI program

  #include "mpi.h"
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <math.h>
  #define MAXSIZE 1000

  int main(int argc, char *argv[])
  {
      int myid, numprocs;
      int data[MAXSIZE], i, x, low, high, myresult = 0, result;
      char fn[255];
      FILE *fp;

      MPI_Init(&argc, &argv);                     /* required */
      MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
      MPI_Comm_rank(MPI_COMM_WORLD, &myid);

  46. Sample MPI program

      if (myid == 0) {                            /* open input file and initialize data */
          strcpy(fn, getenv("HOME"));
          strcat(fn, "/MPI/rand_data.txt");
          if ((fp = fopen(fn, "r")) == NULL) {
              printf("Can't open the input file: %s\n\n", fn);
              exit(1);
          }
          for (i = 0; i < MAXSIZE; i++)
              fscanf(fp, "%d", &data[i]);
      }
      /* All processes execute the rest of the code. */
      MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);   /* broadcast data */

  47. Sample MPI program

      x = MAXSIZE / numprocs;                     /* add my portion of the data */
      low = myid * x;
      high = low + x;
      for (i = low; i < high; i++)
          myresult += data[i];
      printf("I calculated %d from %d\n", myresult, myid);
      /* Compute global sum */
      MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
      if (myid == 0)
          printf("The sum is %d.\n", result);
      MPI_Finalize();
      return 0;
  }

  48. Message-Passing on a Grid • Very expensive: sending data across a wide-area network costs millions of cycles. • Bandwidth is shared with other users. • Links are unreliable.

  49. Computational Strategies • As a computing platform, a grid favors situations with absolute minimum communication between computers.

  50. Strategies With no/minimum communication: • "Embarrassingly parallel" computations - those computations which can obviously be divided into independent parallel parts, with the parts executed on separate computers. • A separate instance of the same problem executing on each system, each using different data.
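  As an illustration (not from the slides), a Monte Carlo estimate of pi is a classic embarrassingly parallel computation: each process generates its own samples independently, and the only communication is a single reduction at the end. The simple per-rank seeding of rand() is good enough for a sketch, not for serious work.

  #include <stdio.h>
  #include <stdlib.h>
  #include "mpi.h"

  #define SAMPLES 1000000L

  int main(int argc, char *argv[])
  {
      int rank, size;
      long local_hits = 0, total_hits = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      srand(rank + 1);                          /* crude independent stream per process */
      for (long i = 0; i < SAMPLES; i++) {      /* no communication in this loop */
          double x = rand() / (double)RAND_MAX;
          double y = rand() / (double)RAND_MAX;
          if (x * x + y * y <= 1.0) local_hits++;
      }
      /* The only communication: combine the per-process counts at the root. */
      MPI_Reduce(&local_hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("pi is approximately %f\n",
                 4.0 * total_hits / ((double)SAMPLES * size));
      MPI_Finalize();
      return 0;
  }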
