
CS 591x – Cluster and Parallel Programming


Presentation Transcript


  1. CS 591x – Cluster and Parallel Programming Nonblocking communications

  2. Remember… • It’s about performance

  3. Blocking vs. Nonblocking Communications • Recall that MPI_Send & MPI_Recv (and others) are blocking operations • In blocking communications, the communication must complete before program execution can proceed • Clearly, MPI_Recv must block • MPI_Recv(&a,1,MPI_INT,next,tag,spcomm,&status) • When is a safe to use? • … when MPI_Recv finishes

  4. Blocking vs. Nonblocking Communications • MPI_Send also blocks • Blocks until the message is received • or until the message is in a system buffer • Consider… • MPI_Send(&b,1,MPI_INT,dest,tag,spcomm) • When is b safe to change? • When MPI_Send completes and program execution continues

  5. Blocking vs. Nonblocking Communications • Communication takes time • Time means compute cycles • …compute cycles that might be used for other computations • …we get better performance in our application if we can initiate a communication… • do something else useful while it is in progress • and check back when it is done.

  6. Nonblocking communications • That’s the idea behind nonblocking communication… • Initiate a communications transaction • Do something else for a while… • …but don’t touch the variables involved in the transaction • check to see if the transaction is finished • proceed with computation using the results of the transaction

  7. Nonblocking Communications • Nonblocking calls use a new type: MPI_Request request; ** This is used to keep track of the transaction

  8. Nonblocking Send int MPI_Isend( void* message, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm, MPI_Request* request)

  9. Nonblocking Recv int MPI_Irecv( void* message, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Request* request)

  10. Nonblocking Send/Recv • Note: the arguments are the same as blocking Send/Recv… • except for the inclusion of the request argument • The request argument is known as the transaction’s “handle”

  11. Nonblocking Send/Recv • So how do we know when the transaction is complete?

  12. MPI_Wait int MPI_Wait( MPI_Request* request, MPI_Status* status)

  13. MPI_Wait • MPI_Wait stops program execution until the communication transaction • … identified by the request handle • … completes • … then application execution proceeds

  14. So something like this… MPI_Request request1; MPI_Status status; MPI_Isend(&a,1,MPI_INT, dest, tag, mycomm, &request1); … // do other stuff here … MPI_Wait(&request1, &status); …

  15. Or something like this MPI_Request request1; MPI_Request request2; MPI_Status status; MPI_Isend(&a,1,MPI_INT,dest,tag,comm1, &request1); MPI_Irecv(&b,1,MPI_INT,src,tag,comm1, &request2); … other stuff … MPI_Wait(&request1, &status); MPI_Wait(&request2, &status);

  16. MPI_Test int MPI_Test( MPI_Request* request, int* flag, MPI_Status* status);

  17. MPI_Test • Tests to determine if the transaction identified by the request handle has completed… • Unlike MPI_Wait, it does not stop program execution

  18. MPI_Test … something like this… MPI_Request request1; MPI_Status status; int flag; MPI_Isend(&a,1,MPI_INT,dest,tag,mycomm, &request1); MPI_Test(&request1, &flag, &status); if (flag == 1) { /* code that executes when the transaction has completed */ } else { /* code that executes when the transaction has not completed */ }

  19. Let’s revisit Request Handles • You can store multiple request handles in an array… MPI_Request req[4]; ** which means you can treat them as a set

  20. Request Handle Arrays MPI_Request recreq[4]; MPI_Status status[4]; … MPI_Irecv(&a[0][0],4,MPI_INT,src0,0,comm,&recreq[0]); MPI_Irecv(&a[1][0],4,MPI_INT,src1,1,comm,&recreq[1]); MPI_Irecv(&a[2][0],4,MPI_INT,src2,2,comm,&recreq[2]); MPI_Irecv(&a[3][0],4,MPI_INT,src3,3,comm,&recreq[3]); … // do other stuff … MPI_Waitall(4, recreq, status); … // continue execution

  21. MPI_Wait… int MPI_Waitall( int req_array_size, MPI_Request req_array[], MPI_Status stat_array[]); *** Wait for all transactions in req_array to complete

  22. MPI_Wait… int MPI_Waitany( int array_size, MPI_Request req_array[], int* completed, MPI_Status* stat); Waits for any one transaction in req_array to complete

  23. MPI_Wait… int MPI_Waitsome( int array_size, MPI_Request req_array[], int* complete_count, int indices[], MPI_Status stat[]) Waits for at least one (can be more) transaction in req_array to complete

  24. MPI_Test… MPI_Testall --- tests to see if all transactions in list[] have completed MPI_Testany – tests to see if at least one transaction in list[] has completed MPI_Testsome – tests to see which of the transactions in list[] have completed **note: argument list similar to MPI_Wait counterpart, but includes a flag or flag[] variable
