
Chapter 5



  1. Chapter 5

  2. Nonblocking Communication
  • MPI_Send and MPI_Recv are blocking operations
  • They will not return until the arguments to the functions can be safely modified by subsequent statements in the program
  • MPI_Send: the message envelope has been created and the message has been sent, or the contents of the message have been copied into a system buffer
  • MPI_Recv: the message has been received into the buffer specified by the buffer argument

  3. Nonblocking Communication
  • With blocking operations, the resources available to the sending or receiving process are not fully utilized
  • The send operation should be able to proceed concurrently with some computation, as long as the computation doesn't modify any of the arguments to the send operation
  • Likewise, if the data to be received is not yet available, the receiving process should be able to continue with useful computation as long as it doesn't interfere with the arguments to the receive
  • Nonblocking communication is explicitly designed to meet these needs

  4. Nonblocking Communication
  • A call to a nonblocking send or receive simply starts, or posts, the communication operation
  • It is then up to the user program to explicitly complete the communication at some later point in the program
  • Any nonblocking operation therefore requires a minimum of two function calls: one to start the operation and one to complete it
  • The basic MPI functions for starting nonblocking communication are MPI_Isend and MPI_Irecv
  • The "I" stands for "immediate": these calls return immediately

  5. MPI_Isend and MPI_Irecv
  • Prototypes for the MPI_Isend and MPI_Irecv functions:

    MPI_Isend(void* buffer, int count, MPI_Datatype datatype,
              int destination, int tag, MPI_Comm communicator,
              MPI_Request* request)

    MPI_Irecv(void* buffer, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm communicator,
              MPI_Request* request)

  6. MPI_Isend and MPI_Irecv
  • The parameters shared with MPI_Send and MPI_Recv have the same meaning
  • The semantics, however, are different: both calls only start the operation
  • MPI_Isend: the system has been informed that it can start copying data out of the send buffer (either to a system buffer or to the destination)
  • MPI_Irecv: the system has been informed that it can start copying data into the receive buffer
  • Neither the send nor the receive buffer should be modified until the operation is explicitly completed or canceled

  7. MPI_Isend and MPI_Irecv
  • The request parameter is a handle to an opaque object
  • The object referenced by request is system defined and cannot be directly accessed by the user
  • Its purpose is to identify the operation started by the nonblocking call
  • It contains information such as the source or destination, the tag, the communicator, and the buffer
  • When the nonblocking operation is completed, the request initialized by the call to MPI_Isend or MPI_Irecv is used to identify the operation to be completed

  8. MPI_Wait
  • MPI provides a variety of functions for completing nonblocking operations
  • The simplest one is MPI_Wait, which can be used to complete any nonblocking operation
  • Its prototype is:

    MPI_Wait(MPI_Request* request, MPI_Status* status)

  • The request corresponds to the one returned by MPI_Isend or MPI_Irecv
  • MPI_Wait blocks until the operation identified by request completes
  • If it was a send, either the message has been sent or it has been buffered by the system
  • If it was a receive, the message has been copied into the receive buffer

  9. MPI_Wait
  • When MPI_Wait returns, request is set to MPI_REQUEST_NULL, meaning there is no pending operation associated with request
  • If the call to MPI_Wait completes an operation started by MPI_Irecv, the information returned in the status parameter is the same as the information returned in status by a call to MPI_Recv
  • It is perfectly legal to match blocking operations with nonblocking operations
  • A message sent with MPI_Isend can be received by a call to MPI_Recv

  10. Example 1

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int myid, nprocs;
        int buffer;
        MPI_Status status;
        MPI_Request request;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        request = MPI_REQUEST_NULL;

        if (myid == 0) {
            buffer = 1234;
            MPI_Isend(&buffer, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &request);
        }
        if (myid == 1) {
            MPI_Irecv(&buffer, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &request);
        }

        /* On ranks other than 0 and 1, request is still MPI_REQUEST_NULL,
           so MPI_Wait returns immediately */
        MPI_Wait(&request, &status);

        if (myid == 0) {
            printf("Processor %d sent %d\n", myid, buffer);
        }
        if (myid == 1) {
            printf("Processor %d got %d\n", myid, buffer);
        }

        MPI_Finalize();
        return 0;
    }

  11. Example 1 Processor 0 sent 1234 Processor 1 got 1234

  12. Example 2

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int my_rank, nprocs;
        int left, right;
        int received = -1;
        int tag = 1;
        MPI_Status statSend, statRecv;
        MPI_Request reqSend, reqRecv;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        left  = (my_rank - 1 + nprocs) % nprocs;
        right = (my_rank + 1) % nprocs;

        MPI_Isend(&my_rank, 1, MPI_INT, left, tag, MPI_COMM_WORLD, &reqSend);
        MPI_Irecv(&received, 1, MPI_INT, right, tag, MPI_COMM_WORLD, &reqRecv);

        MPI_Wait(&reqSend, &statSend);
        MPI_Wait(&reqRecv, &statRecv);

        printf("Totally %d processors, processor %d received from right "
               "neighbor processor: %d\n", nprocs, my_rank, received);

        MPI_Finalize();
        return 0;
    }

  13. Example 2 Totally 8 processors, processor 7 received from right neighbor processor: 0 Totally 8 processors, processor 6 received from right neighbor processor: 7 Totally 8 processors, processor 3 received from right neighbor processor: 4 Totally 8 processors, processor 5 received from right neighbor processor: 6 Totally 8 processors, processor 4 received from right neighbor processor: 5 Totally 8 processors, processor 0 received from right neighbor processor: 1 Totally 8 processors, processor 1 received from right neighbor processor: 2 Totally 8 processors, processor 2 received from right neighbor processor: 3
