
Collective Communication with MPI

Collective Communication with MPI. Hector Urtubia. Collective communication is a communication pattern that involves all processes in a communicator, a pattern for which important optimizations are possible.



Presentation Transcript


  1. Collective Communication with MPI Hector Urtubia

  2. Introduction
  • Collective communication is a communication pattern that involves all processes in a communicator.
  • Because the communication pattern is known in advance, the MPI implementation can apply important optimizations.
  • The MPI API defines functions for the most common collective communication operations.

  3. Frequently Used Collective Operations
  • Broadcast: one process transmits the same data to all others.
  • Reduce: many processes transmit data that is combined on one process.
  • Scatter: distributes data among processes.
  • Gather: gathers data stored in many processes.

  4. Broadcast
  • A communication pattern in which a single process (the root) transmits the same data to all processes in the communicator.
  int MPI_Bcast(void* message, int count, MPI_Datatype datatype, int root, MPI_Comm comm);

  5. Reduce
  • A collective communication in which every process contributes data that is combined using a binary operation; the result is stored on the root process.
  int MPI_Reduce(void* operand, void* result, int count, MPI_Datatype datatype, MPI_Op operator, int root, MPI_Comm comm);

  6. Reduce (cont.)
  • Predefined reduction operations:
    MPI_MAX     Maximum
    MPI_MIN     Minimum
    MPI_SUM     Sum
    MPI_PROD    Product
    MPI_LAND    Logical and
    MPI_BAND    Bitwise and
    MPI_LOR     Logical or
    MPI_BOR     Bitwise or
    MPI_LXOR    Logical exclusive or
    MPI_BXOR    Bitwise exclusive or
    MPI_MAXLOC  Maximum and location of maximum
    MPI_MINLOC  Minimum and location of minimum

  7. Examples for reduce and Broadcast
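  The code for this slide was not preserved in the transcript. The following is a minimal sketch (not the author's original example) combining MPI_Bcast and MPI_Reduce: rank 0 broadcasts an upper bound n to every process, each process sums its own stride of 1..n, and MPI_Reduce combines the partial sums with MPI_SUM on rank 0. File and variable names are illustrative assumptions.

  ```c
  /* bcast_reduce.c - sketch of MPI_Bcast + MPI_Reduce.
     Build: mpicc bcast_reduce.c -o bcast_reduce
     Run:   mpirun -n 4 ./bcast_reduce */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Rank 0 chooses the value; MPI_Bcast copies it into n on every rank. */
      int n = 0;
      if (rank == 0) n = 100;
      MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

      /* Each rank sums its own stride of 1..n. */
      long local = 0;
      for (int i = rank + 1; i <= n; i += size)
          local += i;

      /* Combine the partial sums with MPI_SUM; result lands on rank 0. */
      long total = 0;
      MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("sum 1..%d = %ld\n", n, total);

      MPI_Finalize();
      return 0;
  }
  ```

  Whatever the number of processes, the strides partition 1..n, so rank 0 should report the same total (5050 for n = 100).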

  8. Gather
  • Collects data from each process in a communicator onto the root process.
  int MPI_Gather(void* send_data,      /* data to be sent */
                 int send_count,
                 MPI_Datatype send_type,
                 void* recv_data,      /* significant only at root */
                 int recv_count,
                 MPI_Datatype recv_type,
                 int root,             /* root process */
                 MPI_Comm comm);       /* communicator */

  9. Scatter
  • Splits the data on one process and distributes a piece to every process in the communicator.
  int MPI_Scatter(void* send_data, int send_count, MPI_Datatype send_type, void* recv_data, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm communicator);

  10. Example of Gather and Scatter
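  The code for this slide was also lost in the transcript. Below is a minimal sketch (not the author's original example) of the scatter-compute-gather pattern: the root scatters one fixed-size block of an array to each rank, every rank doubles its block locally, and MPI_Gather reassembles the blocks in rank order on the root. The block size of 4 and the array contents are illustrative assumptions.

  ```c
  /* scatter_gather.c - sketch of MPI_Scatter + MPI_Gather.
     Build: mpicc scatter_gather.c -o scatter_gather
     Run:   mpirun -n 4 ./scatter_gather */
  #include <stdio.h>
  #include <stdlib.h>
  #include <mpi.h>

  enum { PER_PROC = 4 };   /* elements each rank receives (assumed) */

  int main(int argc, char *argv[]) {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Only the root allocates and fills the full array. */
      int *all = NULL;
      if (rank == 0) {
          all = malloc(size * PER_PROC * sizeof(int));
          for (int i = 0; i < size * PER_PROC; i++)
              all[i] = i;
      }

      /* Each rank receives one contiguous PER_PROC-sized block. */
      int chunk[PER_PROC];
      MPI_Scatter(all, PER_PROC, MPI_INT,
                  chunk, PER_PROC, MPI_INT, 0, MPI_COMM_WORLD);

      /* Local work: double every element of this rank's block. */
      for (int i = 0; i < PER_PROC; i++)
          chunk[i] *= 2;

      /* Reassemble the blocks in rank order on the root. */
      MPI_Gather(chunk, PER_PROC, MPI_INT,
                 all, PER_PROC, MPI_INT, 0, MPI_COMM_WORLD);

      if (rank == 0) {
          for (int i = 0; i < size * PER_PROC; i++)
              printf("%d ", all[i]);
          printf("\n");
          free(all);
      }
      MPI_Finalize();
      return 0;
  }
  ```

  Note that the send arguments of MPI_Scatter and the receive arguments of MPI_Gather are significant only at the root, which is why the non-root ranks can pass NULL for `all`.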
