
Message Passing Programming with MPI



    1. Message Passing Programming with MPI
       - Some concepts in message passing programming
       - Introduction to MPI
       - Basic MPI functions
       Most of the MPI materials are obtained from William Gropp and Rusty Lusk's MPI tutorial at http://www.mcs.anl.gov/mpi/tutorial/

    2. Message passing programming
       - Separate (collaborative) processes are running.
       - Whether they run on the same processor or on different processors makes no difference in terms of correctness. Notice the difference from the OpenMP model.
       - These processes interact by exchanging information. In the message passing model, exchanging information is done explicitly by calling communication routines.

    3. Message passing program execution model
       - The MIMD (multiple instruction, multiple data) model allows different processes to run different executables: a0.out for process 0, a1.out for process 1, ..., ai.out for process i. This flexibility is what we might want in most cases; however, it causes a lot of problems:
         - 1000 executables for a 1000-process run is not scalable.
         - One must specify which process gets which executable, and the execution interface can be painful.
       - The SPMD (single program, multiple data) model uses one executable: all processes run a.out.
         - This more closely matches the execution of a sequential program.
         - Most systems follow this model.

    4. SPMD (single program, multiple data)
       - The SPMD model uses one executable; most platforms follow this model.
       - The MIMD model can be realized within the SPMD model. In a.out:
           if (myid == 0) run the a0.out code
           if (myid == 1) run the a1.out code
           ...
       - One can usually do better than this, since most processes in a message passing program do similar things.
       - We will only discuss SPMD message passing programs. A sketch of the idea follows.
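
    A minimal sketch of how one executable can emulate MIMD behavior by branching on the process rank. The workload functions work_for_process0 and work_for_process1 are hypothetical placeholders, not part of the slides; myid is obtained with MPI_Comm_rank, introduced on slide 16.

        #include "mpi.h"
        #include <stdio.h>

        /* Hypothetical per-rank workloads; in a true MIMD setup these
           would be the bodies of a0.out, a1.out, ... */
        static void work_for_process0(void) { printf("rank 0's job\n"); }
        static void work_for_process1(void) { printf("rank 1's job\n"); }
        static void common_work(void)       { printf("the common job\n"); }

        int main(int argc, char *argv[])
        {
            int myid;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &myid);

            /* One executable, but each rank selects its own code path. */
            if (myid == 0)
                work_for_process0();
            else if (myid == 1)
                work_for_process1();
            else
                common_work();

            MPI_Finalize();
            return 0;
        }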

    5. Issues in running a message passing program
       - On which processor is each process executed?
       - For the SPMD model, the pool of processors (machines) must be specified.
       - In the ideal case, when all machines are identical (same memory, same CPU, same network connectivity), specifying the pool is sufficient.
       - In a typical parallel environment the processors (machines) are the same, but the network connectivity usually is not.

    7. Issues in running a message passing program (continued)
       - The mapping between processes and processors can be very important to performance, and the best mapping is usually platform-specific.
       - Summary: to run a message passing program, one must specify
         - the processes (SPMD),
         - the processors (the platform), and
         - the mapping between them.
       - Performance issues arise even at this level, an issue the average programmer does not understand.

    8. Even at this very high level (e.g., process mapping), the message passing paradigm has a lot more subtlety than the sequential or shared memory programming paradigms. Message passing is targeted at writing efficient and scalable programs for scalable platforms (more than 10000 processing elements). In many cases, anything that affects performance must be dealt with explicitly.

    9. Message passing mechanism: two types of communication
       - Cooperative communication: all parties agree to the communication.
       - One-sided communication: only one process is involved in the communication.

    10. Cooperative communication
       - Data must be explicitly sent and received.
       - Communication in TCP/IP socket programming belongs to this type: the sender must write and the receiver must read.

    11. One-sided communication
       - Can be either a remote read or a remote write.
       - Only one side is involved in the operation.
       - Data is accessed without waiting for the other process. A sketch follows.
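
    The slides show no one-sided code, but MPI-2's remote memory access interface illustrates the idea. A minimal sketch, assuming at least two processes, in which rank 0 writes directly into memory exposed by rank 1, which takes no matching receive action:

        #include "mpi.h"
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, value = 0;
            MPI_Win win;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Every process exposes one int as a window that
               other processes may read or write remotely. */
            MPI_Win_create(&value, sizeof(int), sizeof(int),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win);

            MPI_Win_fence(0, win);               /* open access epoch  */
            if (rank == 0) {
                int data = 42;
                /* Remote write into rank 1's window. */
                MPI_Put(&data, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
            }
            MPI_Win_fence(0, win);               /* close access epoch */

            if (rank == 1)
                printf("rank 1 sees value = %d\n", value);

            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }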

    12. Message Passing Interface (MPI)
       - MPI is an industry standard that specifies the library routines needed for writing message passing programs.
         - Mainly communication routines.
         - Also includes other features, such as topologies.
       - MPI allows the development of portable message passing programs.
       - It is a standard supported by essentially everybody in the field.

    13. MPI is both simple and complex.
       - Almost all MPI programs can be realized with six MPI routines.
       - MPI has a total of more than 100 functions and a lot of concepts.
       - We will mainly discuss the simple MPI, but we will also give a glimpse of the complex MPI.
       - MPI is about the right size: one has the flexibility when it is required, yet one can start using it after learning just the six routines.

    14. The hello world MPI program

        #include "mpi.h"
        #include <stdio.h>

        int main( int argc, char *argv[] )
        {
            MPI_Init( &argc, &argv );
            printf( "Hello world\n" );
            MPI_Finalize();
            return 0;
        }
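
    With the setup described on the next slide, a run might look like the following; all four processes execute the same printf, so the line appears four times, in no guaranteed order:

        $ mpicc hello.c
        $ mpiexec -n 4 a.out
        Hello world
        Hello world
        Hello world
        Hello world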

    15. Compiling, linking, and running MPI programs
       MPICH2 is installed on linprog. To run an MPI program, do the following:
       - Create a file called .mpd.conf in your home directory with the content 'secretword=clusterxxx', where xxx is a number of your choosing.
       - Create a file 'hosts' specifying the machines to be used to run MPI programs (see linprog_hosts).
       - Boot the system: 'mpdboot -n 3 -f hosts'
       - Check that the system is set up correctly: 'mpdtrace'
       - Compile the program: 'mpicc hello.c'
       - Run the program: 'mpiexec -machinefile hostmap -n 4 a.out'
         (hostmap specifies the mapping; -n 4 runs the program with 4 processes)
       - Exit MPI: 'mpdallexit'

    16. More on MPI
       - MPICH uses the SPMD model. For an SPMD program to emulate MIMD code, the program must know its execution environment:
         - How many processes are working on this problem? MPI_Comm_size
         - What is my id? MPI_Comm_rank
       - Rank is with respect to a communicator (a context for communication). MPI_COMM_WORLD is a predefined communicator that includes all processes (already mapped to processors).
       - See example2.c; a sketch of such a program follows.
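
    example2.c itself is not reproduced in the transcript; the following is a plausible minimal sketch of a program that queries its execution environment with the two routines named above:

        #include "mpi.h"
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int size, rank;
            MPI_Init(&argc, &argv);
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes? */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which one am I?     */
            printf("I am process %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }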

    17. Sending and receiving messages. Questions to be answered:
       - To whom is the data sent?
       - What is sent?
       - How does the receiver identify the message?

    18. Socket programming answers these questions as follows:
       - To whom is the data sent? A socket id.
       - What is sent? A buffer pointer and a contiguous stream of bytes.
       - How does the receiver identify the message? Socket id plus message order.
       What does a programmer typically want instead?
       - To identify the peer by a peer id, not a socket id.
       - To send/receive data with flexible types, including user-defined types.
       - To identify messages by sender id, message order, or a tag.

    19. Send and receive routines in MPI
       - MPI_Send and MPI_Recv (blocking send/receive).
       - Identify the peer: by peer rank (peer id).
       - Specify the data: starting address, datatype, and count. An MPI datatype is recursively defined as:
         - predefined, corresponding to a data type from the language (e.g., MPI_INT, MPI_DOUBLE)
         - a contiguous array of MPI datatypes
         - a strided block of datatypes
         - an indexed array of blocks of datatypes
         - an arbitrary structure of datatypes
         There are MPI functions to construct custom datatypes, in particular ones for subarrays (see the sketch after this list).
       - Identify the message: by peer + tag.
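
    As one concrete illustration of a custom datatype (not shown on the slides), MPI_Type_vector describes a strided block; here it lets one column of a row-major 4x4 matrix of doubles be sent as a single message. The helper name build_column_type and the usage in the trailing comment are illustrative:

        #include "mpi.h"

        /* Describe one column of a row-major 4x4 double matrix:
           4 blocks of 1 double each, spaced 4 doubles apart. */
        void build_column_type(MPI_Datatype *column)
        {
            MPI_Type_vector(4, 1, 4, MPI_DOUBLE, column);
            MPI_Type_commit(column);   /* must commit before use */
        }

        /* A column col of matrix a could then be sent with
               MPI_Send(&a[0][col], 1, column, dest, tag, MPI_COMM_WORLD);
           and released with MPI_Type_free(&column) when done. */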

    20. MPI blocking send
       MPI_Send(start, count, datatype, dest, tag, comm)
       - The message buffer is described by (start, count, datatype).
       - The target process is specified by dest, the rank of the target process in the communicator comm.
       - When this function returns, the data has been delivered to the system and the buffer can be reused; the message may not yet have been received by the target process.

    21. MPI blocking receive
       MPI_Recv(start, count, datatype, source, tag, comm, status)
       - Waits until a matching message (matching both source and tag) is received from the system; the buffer can then be used.
       - source is a rank in the communicator specified by comm, or MPI_ANY_SOURCE (accept a message from anyone).
       - tag is a tag to be matched, or MPI_ANY_TAG.
       - Receiving fewer than count occurrences of datatype is OK, but receiving more is an error (the result is undefined).
       - status contains further information (e.g., the size of the message and the rank of the source).
       See example3a.c for the use of MPI_Send and MPI_Recv; a sketch of a matching pair follows.
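
    example3a.c is not included in the transcript; a self-contained sketch of a matching MPI_Send/MPI_Recv pair, including use of the status argument, might look like this (run with at least two processes):

        #include "mpi.h"
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, data[4] = {1, 2, 3, 4};
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                /* Send 4 ints to rank 1 with tag 99. */
                MPI_Send(data, 4, MPI_INT, 1, 99, MPI_COMM_WORLD);
            } else if (rank == 1) {
                int buf[10], count;
                MPI_Status status;
                /* count = 10 is only an upper bound; receiving fewer is OK. */
                MPI_Recv(buf, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);
                MPI_Get_count(&status, MPI_INT, &count);
                printf("got %d ints from rank %d with tag %d\n",
                       count, status.MPI_SOURCE, status.MPI_TAG);
            }
            MPI_Finalize();
            return 0;
        }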

    22. The PI program
       - In the classic version of this example, pi is approximated as a midpoint-rule summation of 4/(1+x^2) over [0, 1].
       - The summation can be performed in a parallel fashion.
       - See example3a.c and the sketch below.
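
    The transcript does not include example3a.c; the following sketch of the classic pi computation uses only the six simple MPI routines: each process sums a strided subset of the terms and sends its partial sum to rank 0.

        #include "mpi.h"
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, size, i, n = 1000000;   /* n subintervals */
            double h, x, sum = 0.0, pi;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Midpoint rule: rank handles terms i = rank, rank+size, ... */
            h = 1.0 / n;
            for (i = rank; i < n; i += size) {
                x = h * (i + 0.5);
                sum += 4.0 / (1.0 + x * x);
            }
            sum *= h;

            if (rank != 0) {
                /* Workers send their partial sums to rank 0. */
                MPI_Send(&sum, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            } else {
                double part;
                MPI_Status status;
                pi = sum;
                for (i = 1; i < size; i++) {
                    MPI_Recv(&part, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                             MPI_COMM_WORLD, &status);
                    pi += part;
                }
                printf("pi is approximately %.16f\n", pi);
            }
            MPI_Finalize();
            return 0;
        }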

    23. The simple MPI (six functions that make most programs work):
       - MPI_INIT
       - MPI_FINALIZE
       - MPI_COMM_SIZE
       - MPI_COMM_RANK
       - MPI_SEND
       - MPI_RECV
       Only MPI_Send and MPI_Recv are non-trivial. Everything that can be done with TCP/IP socket programming or UNIX pipes can be realized with these six functions, in a more concise manner. Their C bindings are listed below.
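
    For reference, the C bindings of the six routines as given by the MPI standard (the const qualifier on the send buffer was added in MPI-3):

        int MPI_Init(int *argc, char ***argv);
        int MPI_Finalize(void);
        int MPI_Comm_size(MPI_Comm comm, int *size);
        int MPI_Comm_rank(MPI_Comm comm, int *rank);
        int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                     int dest, int tag, MPI_Comm comm);
        int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                     int source, int tag, MPI_Comm comm, MPI_Status *status);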
