
7. Message-Passing Parallel Programming with MPI

Presentation Transcript


  1. 7. Message-Passing Parallel Programming with MPI

  2. What is MPI? • MPI (Message-Passing Interface) is a message-passing library specification that can be used to write parallel programs for parallel computers, clusters, and heterogeneous networks. Portability across platforms is a crucial advantage of MPI: not only can MPI programs run on any distributed-memory machine or multicomputer, they can also run efficiently on shared-memory machines, whereas OpenMP programs run efficiently only on machines with hardware support for shared memory. • Like OpenMP, MPI was designed with the participation of computer vendors (IBM, Intel, Cray, Convex, Meiko, Ncube) and software houses (KAI, ParaSoft). Some universities also participated in the design of MPI.

  3. Cooperative operations • Message passing makes the exchange of data cooperative: data must be explicitly sent and explicitly received. • An advantage is that any change in the receiver's memory is made with the receiver's participation. (From W. Gropp's transparencies: Tutorial on MPI. http://www.mcs.anl.gov/mpi )

  4. One-sided operations • One-sided operations between parallel processes include remote memory reads and writes. • An advantage is that data can be accessed without involvement of another process. (From W. Gropp’s transparencies: Tutorial on MPI. http://www.mcs.anl.gov/mpi ).

  5. Comparison • One-sided operations tend to produce programs that are easier to read: only the process that uses the data has to participate in the communication. • Example 1: In the following code, process 1 executes the statement a = f(b,c), where a and c reside in the memory of process 2:

    process 1:
       receive(2, x)
       y = f(b, x)
       send(2, y)

    process 2:
       send(1, c)
       receive(1, a)

• On a shared-memory machine, or a machine with a global address space, the code is simpler because process 2 does not need to participate in the computation:

    process 1:
       x := get(2, c)
       y = f(b, x)
       put(2, a) := y
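A minimal C sketch of Example 1's two-sided exchange, not part of the original slides: it uses the standard MPI_Send/MPI_Recv calls, the names a, b, c, x, y follow the pseudocode above, and the datatype, tag, and the stand-in body of f are assumptions.

    /* Hedged sketch: Example 1 with two-sided (cooperative) MPI calls.      */
    /* Assumptions: values are doubles, tag 0, ranks 1 and 2 exist in        */
    /* MPI_COMM_WORLD, and f(b,x) is stood in for by b + x.                  */
    #include <mpi.h>

    void example1(int rank, double b, double *a, double c)
    {
        double x, y;
        if (rank == 1) {
            MPI_Recv(&x, 1, MPI_DOUBLE, 2, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);              /* x <- c from process 2 */
            y = b + x;                                /* stands in for y = f(b, x) */
            MPI_Send(&y, 1, MPI_DOUBLE, 2, 0, MPI_COMM_WORLD);
        } else if (rank == 2) {
            MPI_Send(&c, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   /* ship c */
            MPI_Recv(a, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);              /* a <- f(b, c) */
        }
    }

Note that both processes must issue calls for the single logical assignment a = f(b,c) to happen, which is exactly the point of the comparison.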

  6. Example 2: Executing the loop

    do i = 1, n
       a(i) = x(k(i))
    end do

in parallel on a message-passing machine can require complex interactions between processors, because x(k(i)) may refer to data owned by any process. • Pointer chasing across the whole machine is another case where the difficulties of message passing become apparent.

  7. One-sided MPI programming
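The code from this slide is not in the transcript. As a hedged sketch of the one-sided style, the following C fragment exposes each process's block of x through an MPI window and fetches remote elements with MPI_Get, in the spirit of Example 2; the block distribution, the fence synchronization, and all names are choices made here, not taken from the slide.

    /* Hedged sketch: a(i) = x(k(i)) with one-sided MPI (RMA).               */
    /* Each process owns nlocal consecutive elements of the global x and     */
    /* exposes them in a window; off-process elements are read with MPI_Get. */
    #include <mpi.h>

    void gather_indirect(double *x, int nlocal, const int *k, double *a, int n)
    {
        MPI_Win win;
        MPI_Win_create(x, (MPI_Aint)(nlocal * sizeof(double)), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);                    /* open an access epoch       */
        for (int i = 0; i < n; i++) {
            int owner  = k[i] / nlocal;           /* assumed block distribution */
            int offset = k[i] % nlocal;
            MPI_Get(&a[i], 1, MPI_DOUBLE, owner,
                    (MPI_Aint)offset, 1, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);                    /* complete all the gets      */
        MPI_Win_free(&win);
    }

Only the reading process issues a call per element; the owner of the data participates only through the collective fences.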

  8. Form of MPI programs • Typically SPMD (Single Program Multiple Data), not MPMD • Program is sequential code in Fortran, C, or C++ • All variables are local; there are no shared variables • Important components: • Include files • Initialize • Identity of each process • Shutting down

  9. Include files • MPI header file #include <mpi.h> • It provides basic MPI definitions and types • Also must include all other header files needed for C (or Fortran)

  10. Initialize • Before using any MPI operations, a program must start the library:

    int main (int argc, char *argv[]) {
       int i;   /* local variable, one copy per process */
       ...
       MPI_Init(&argc, &argv);
       ...
    }

• MPI_Init is the first MPI function called by each process • Must be passed argc and argv: they are needed to initialize MPI • Not necessarily the first executable statement • Allows the system to do any necessary setup

  11. Identity of each process • Communicators identify the set of processes within which a process is working • Each communicator has a size (number of processes) • Within a communicator, each process has a number (or id) • MPI_COMM_WORLD • Is the default communicator • It includes all processes involved in the execution of the MPI program • Communicators representing subsets of the processes in MPI_COMM_WORLD are quite useful, as we will see later on

  12. Identity of each process: Communicators • Communicator: opaque object that provides a message-passing environment for processes • MPI_COMM_WORLD • Default communicator • Includes all processes • Possible to create new communicators (a sketch follows after the next slide)

  13. Identity of each process: Communicators • [Diagram: the communicator MPI_COMM_WORLD shown as a set of processes with ranks 0 through 5; labels mark the communicator name, the set of processes, and the process ids (ranks).]
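Slide 12 notes that new communicators can be created. As an illustrative sketch (not from the slides), MPI_Comm_split partitions an existing communicator by a "color" value:

    /* Hedged sketch: split MPI_COMM_WORLD into even-rank and odd-rank       */
    /* subcommunicators; the color choice (id % 2) is just an illustration.  */
    #include <mpi.h>

    void split_world(void)
    {
        int id, sub_id;
        MPI_Comm subcomm;

        MPI_Comm_rank(MPI_COMM_WORLD, &id);
        MPI_Comm_split(MPI_COMM_WORLD, id % 2, id, &subcomm);  /* same color => same subcomm */
        MPI_Comm_rank(subcomm, &sub_id);     /* rank within the new communicator */
        MPI_Comm_free(&subcomm);
    }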

  14. Identity of each process: Determine Number of Processes • First argument is the communicator • The number of processes in the communicator is returned through the second argument:

    MPI_Comm_size (MPI_COMM_WORLD, &p);

  15. Identity of each process: Determine Process Rank • First argument is the communicator • The process rank (a value 0, 1, ..., p-1) is returned through the second argument:

    MPI_Comm_rank (MPI_COMM_WORLD, &id);

  16. Shutting Down MPI • Called after all other MPI library calls • Allows the system to free up MPI resources:

    MPI_Finalize();
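Putting slides 9 through 16 together, a minimal C skeleton looks roughly like the following; it is the C counterpart of the Fortran program on the next slide, with the printed message chosen to match it.

    #include <mpi.h>
    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        int p, id;

        MPI_Init(&argc, &argv);               /* start the MPI library   */
        MPI_Comm_size(MPI_COMM_WORLD, &p);    /* number of processes     */
        MPI_Comm_rank(MPI_COMM_WORLD, &id);   /* rank of this process    */

        printf("hello world I'm %d of %d\n", id, p);

        MPI_Finalize();                       /* free up MPI resources   */
        return 0;
    }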

  17. “Simplisimo” MPI program (in Fortran)

    include 'mpif.h'
    integer rank, size, ierr
    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
    print *, "hello world I'm ", rank, " of ", size
    call MPI_FINALIZE(ierr)
    end

(From W. Gropp's transparencies: Tutorial on MPI. http://www.mcs.anl.gov/mpi )

  18. Example output of previous program

    hello world I'm 1 of 20
    hello world I'm 2 of 20
    hello world I'm 3 of 20
    hello world I'm 4 of 20
    hello world I'm 5 of 20
    hello world I'm 6 of 20
    hello world I'm 7 of 20
    hello world I'm 9 of 20
    hello world I'm 11 of 20
    hello world I'm 12 of 20
    hello world I'm 10 of 20
    hello world I'm 8 of 20
    hello world I'm 14 of 20
    hello world I'm 13 of 20
    hello world I'm 15 of 20
    hello world I'm 16 of 20
    hello world I'm 19 of 20
    hello world I'm 0 of 20
    hello world I'm 18 of 20
    hello world I'm 17 of 20
    4.03u 0.43s 0:01.88e 237.2%

• Note that the lines appear in nondeterministic order: the processes run independently.

  19. Another simple MPI program • Given a logical formula, we want to know if it ever evaluates to "true" • Known as the satisfiability problem • From Wikipedia: "A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true" • NP-complete problem • Program from Michael J. Quinn's (out of print) book "Parallel Programming in C with MPI and OpenMP"

  20. Satisfiability program (1 of 2)

    #include <mpi.h>
    #include <stdio.h>

    int main (int argc, char *argv[]) {
       int i;
       int id;
       int p;
       void check_formula (int, int);

       MPI_Init (&argc, &argv);
       MPI_Comm_rank (MPI_COMM_WORLD, &id);
       MPI_Comm_size (MPI_COMM_WORLD, &p);
       for (i = id; i < 65536; i += p)
          /* check the formula on the 16 input variables (2^16 = 65536 assignments) */
          /* and print whenever the output is "true" */
          check_formula (id, i);
       printf ("Process %d is done\n", id);
       fflush (stdout);
       MPI_Finalize();
    }

  21. Satisfiability program (2 of 2)

    /* Return 1 if 'i'th bit of 'n' is 1; 0 otherwise */
    #define EXTRACT_BIT(n,i) ((n&(1<<i))?1:0)

    void check_formula (int id, int z) {
       int v[16];   /* Each element is a bit of z */
       int i;

       for (i = 0; i < 16; i++) v[i] = EXTRACT_BIT(z,i);
       if ((v[0] || v[1]) && (!v[1] || !v[3]) && (v[2] || v[3])
          && (!v[3] || !v[4]) && (v[4] || !v[5]) && (v[5] || !v[6])
          && (v[5] || v[6]) && (v[6] || !v[15]) && (v[7] || !v[8])
          && (!v[7] || !v[13]) && (v[8] || v[9]) && (v[8] || !v[9])
          && (!v[9] || !v[10]) && (v[9] || v[11]) && (v[10] || v[11])
          && (v[12] || v[13]) && (v[13] || !v[14]) && (v[14] || v[15])) {
          printf ("%d) %d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d\n", id,
             v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],
             v[10],v[11],v[12],v[13],v[14],v[15]);
          fflush (stdout);
       }
    }

  22. Send/Receive Operations

  23. Send Operations in MPI • The basic (blocking) send operation in MPI is

    MPI_SEND(buf, count, datatype, dest, tag, comm, ierror)

where • (buf, count, datatype) describes count occurrences of items of type datatype starting at buf • dest is the rank of the destination in the set of processes associated with the communicator comm • tag is an integer used to restrict receipt of the message • ierror is the error code (From W. Gropp, E. Lusk, and A. Skjellum. Using MPI. MIT Press, 1996.)

  24. Receive Operations in MPI • The receive operation has the form

    MPI_RECV(buf, count, datatype, source, tag, comm, status, ierror)

where • count is the size of the receive buffer • source is the id of the source process, or MPI_ANY_SOURCE • tag is a message tag, or MPI_ANY_TAG • status contains the source, tag, and count of the message actually received • ierror is the error code
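As a small hedged example (not from the slides) of how the status argument is used, the fragment below receives from any source with any tag and then reads the source, tag, and count back out of the status; the buffer size and datatype are arbitrary choices.

    #include <mpi.h>
    #include <stdio.h>

    void recv_any(void)
    {
        int buf[100], count;
        MPI_Status status;

        MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);

        MPI_Get_count(&status, MPI_INT, &count);    /* items actually received */
        printf("got %d ints from rank %d with tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }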

  25. Example: Parallel Numerical Integration (quadrature) in MPI • Each process integrates over one subinterval of [a,b] inside the routine integrate and stores the result in psum • It then sends psum to process 0 • Process 0 receives these partial values into tmp and accumulates them into the result (res); a sketch of this pattern follows below
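The slide's code itself is not in the transcript; the following C sketch is consistent with the description above. The names psum, tmp, and res and the routine integrate come from the slide text, while the integrand, the midpoint rule inside integrate, the tag, and the loop over senders are assumptions of this sketch.

    #include <mpi.h>

    static double f(double x) { return x * x; }         /* placeholder integrand */

    /* Midpoint rule on [lo, hi]; the slide only names the routine "integrate". */
    static double integrate(double lo, double hi)
    {
        const int steps = 1000;
        double h = (hi - lo) / steps, s = 0.0;
        for (int i = 0; i < steps; i++)
            s += f(lo + (i + 0.5) * h);
        return s * h;
    }

    double parallel_quadrature(double a, double b, int id, int p)
    {
        double psum, tmp, res = 0.0;
        double w = (b - a) / p;                          /* width of each subinterval */

        psum = integrate(a + id * w, a + (id + 1) * w);  /* my subinterval */

        if (id != 0) {
            MPI_Send(&psum, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else {
            res = psum;
            for (int src = 1; src < p; src++) {
                MPI_Recv(&tmp, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                res += tmp;                              /* accumulate partial sums */
            }
        }
        return res;                                      /* meaningful on process 0 only */
    }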

  26. Collective operations

  27. Broadcast and Reduce • The routine MPI_BCAST sends data from one process to all others • The routine MPI_REDUCE combines data from all processes (by adding them in the example shown next) and returns the result to a single process
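Before the full Fortran example on the next slide, here is a hedged C fragment showing the two calls in isolation; the value of n, the per-process contribution, and the use of MPI_SUM are illustrative choices.

    #include <mpi.h>

    void bcast_and_reduce(int rank)
    {
        int n = 0;
        double mine, total;

        if (rank == 0) n = 100;          /* only the root knows n initially */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        mine = (double)rank * n;         /* some per-process contribution */
        MPI_Reduce(&mine, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        /* total is defined only on rank 0 */
    }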

  28. Example: Compute an approximation to π

      program main
      include 'mpif.h'
      double precision PI25DT
      parameter (PI25DT = 3.141592653589793238462643d0)
      double precision mypi, pi, h, sum, x, f, a
      integer n, myid, numprocs, i, rc
c     function to integrate
      f(a) = 4.d0 / (1.d0 + a*a)

      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      print *, "Process ", myid, " of ", numprocs, " is alive"

      sizetype = 1
      sumtype  = 2

 10   if ( myid .eq. 0 ) then
         write(6,98)
 98      format('Enter the number of intervals: (0 quits)')
         read(5,99) n
 99      format(i10)
      endif

      call MPI_BCAST(n,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)

c     check for quit signal
      if ( n .le. 0 ) goto 30

c     calculate the interval size
      h = 1.0d0/n
      sum = 0.0d0
      do 20 i = myid+1, n, numprocs
         x = h * (dble(i) - 0.5d0)
         sum = sum + f(x)
 20   continue
      mypi = h * sum

c     collect all the partial sums
      call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
     $                MPI_COMM_WORLD,ierr)

c     node 0 prints the answer
      if (myid .eq. 0) then
         write(6, 97) pi, abs(pi - PI25DT)
 97      format(' pi is approximately: ', F18.16,
     +          ' Error is: ', F18.16)
      endif
      goto 10

 30   call MPI_FINALIZE(rc)
      stop
      end
