
Introduction to MPI




Presentation Transcript


  1. Introduction to MPI Yeni Herdiyeni (http://www.cs.ipb.ac.id/~yeni)

  2. Topics • MPI Overview • Process Model and Language Binding • Message and Point-to-Point Communication

  3. The Message-Passing Programming Paradigm

  4. The Message-Passing Programming Paradigm

  5. Data and Work Distribution

  6. What is SPMD?

  7. Message

  8. The Message
  Message structure: envelope (source, destination, communicator, tag) and body (buffer, count, datatype).
  • A message is an array of elements of some particular MPI datatype
  • MPI datatypes: basic types and derived types
  • Derived types can be built up from basic types
  • C types are different from Fortran types
  • Messages are identified by their envelopes; a message can be received only if the receiver specifies the correct envelope
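  As a brief illustration (a minimal sketch, not from the slides; send_triplet and its arguments are made up for the example), a derived type can be built from a basic type with MPI_Type_contiguous:

    #include <mpi.h>

    /* send three doubles as one element of a derived type */
    void send_triplet(int dest, MPI_Comm comm)
    {
        double xyz[3] = { 1.0, 2.0, 3.0 };
        MPI_Datatype triplet;                 /* derived from MPI_DOUBLE */

        MPI_Type_contiguous(3, MPI_DOUBLE, &triplet);
        MPI_Type_commit(&triplet);

        /* envelope: dest, tag 0, comm; body: xyz, count 1, type triplet */
        MPI_Send(xyz, 1, triplet, dest, 0, comm);

        MPI_Type_free(&triplet);
    }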

  9. C - MPI Basic Datatypes
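  For reference, the MPI standard defines these C basic datatypes: MPI_CHAR (signed char), MPI_SHORT (signed short int), MPI_INT (signed int), MPI_LONG (signed long int), MPI_UNSIGNED_CHAR (unsigned char), MPI_UNSIGNED_SHORT (unsigned short int), MPI_UNSIGNED (unsigned int), MPI_UNSIGNED_LONG (unsigned long int), MPI_FLOAT (float), MPI_DOUBLE (double), MPI_LONG_DOUBLE (long double), plus MPI_BYTE and MPI_PACKED with no direct C equivalent.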

  10. General MPI Program Structure
  Every MPI program includes the MPI header file, declares its variables, initializes the MPI environment, does work and makes message-passing calls, and finally terminates the MPI environment:

    #include <mpi.h>                          /* MPI include file */
    int main(int argc, char *argv[])
    {
        int np, rank, ierr;                   /* variable declarations */
        ierr = MPI_Init(&argc, &argv);        /* initialize MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &np);
        /* do work and make message-passing calls */
        ierr = MPI_Finalize();                /* terminate MPI environment */
        return 0;
    }
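  As a usage note (assuming a typical installation such as MPICH or Open MPI; the file name hello.c is illustrative), such a program is compiled with the mpicc wrapper and launched on, say, four processes with mpirun:

    mpicc hello.c -o hello
    mpirun -np 4 ./hello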

  11. Access

  12. Addressing

  13. Reception • All messages must be received

  14. Point-to-Point Communication • The fundamental communication facility provided by the MPI library • Conceptually simple: A sends a message to B, and B receives the message from A. It is less simple in practice. • Communication takes place within a communicator • Source and destination are identified by their rank in the communicator

  15. Point-to-Point Communication

  16. Communication Modes and MPI Subroutines
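  For reference (standard MPI, beyond what the slide title shows), there are four send modes, each with a blocking and a non-blocking subroutine:
    standard:    MPI_Send  / MPI_Isend
    synchronous: MPI_Ssend / MPI_Issend
    buffered:    MPI_Bsend / MPI_Ibsend
    ready:       MPI_Rsend / MPI_Irsend
  Receives are mode-independent: MPI_Recv (blocking) and MPI_Irecv (non-blocking).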

  17. Standard Send and Receive • C:
    int MPI_Send(void *buf, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm);
    int MPI_Recv(void *buf, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Status *status);
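  As an aside (standard MPI, not shown on the slide; buf is assumed to be a double array of length 100), the receiver may use wildcards for the envelope, and the actual source and tag are then available in the status object:

    MPI_Status status;
    MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    /* who actually sent the message, and with which tag */
    int actual_src = status.MPI_SOURCE;
    int actual_tag = status.MPI_TAG;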

  18. Example: Send and Receive
  In this program, process 0 sends a message to process 1, and process 1 receives it. Note the use of myrank in a conditional to limit execution of code to a particular process.

    /* simple send and receive */
    #include <stdio.h>
    #include <mpi.h>
    int main(int argc, char **argv)
    {
        int myrank;
        MPI_Status status;
        double a[100];
        MPI_Init(&argc, &argv);                  /* initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* get rank */
        if (myrank == 0)      /* send a message */
            MPI_Send(a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD);
        else if (myrank == 1) /* receive a message */
            MPI_Recv(a, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status);
        MPI_Finalize();                          /* terminate MPI */
        return 0;
    }

  19. Sending and Receiving: an Example

    #include <stdio.h>
    #include <mpi.h>
    int main(int argc, char *argv[])
    {
        int err, nproc, myid;
        MPI_Status status;
        float a[2];
        err = MPI_Init(&argc, &argv);
        err = MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        err = MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        if (myid == 0) {              /* rank 0 fills and sends the array */
            a[0] = 3.0;
            a[1] = 5.0;
            MPI_Send(a, 2, MPI_FLOAT, 1, 10, MPI_COMM_WORLD);
        } else if (myid == 1) {       /* rank 1 receives and prints it */
            MPI_Recv(a, 2, MPI_FLOAT, 0, 10, MPI_COMM_WORLD, &status);
            printf("%d: a[0]=%f a[1]=%f\n", myid, a[0], a[1]);
        }
        err = MPI_Finalize();
        return 0;
    }

  20. Again About Completion • Standard MPI_RECV and MPI_SEND block the calling process until completion. • For MPI_RECV, completion means the message has arrived and the process can proceed using the received data. • For MPI_SEND, completion means the process can proceed and the data can be overwritten without interfering with the message. But this does not mean that the message has already been sent: in many MPI implementations, depending on the message size, the data to be sent is copied to MPI internal buffers. • If the message is not buffered, a call to MPI_SEND implies a process synchronization; if it is buffered, it does not. • Don't make any assumptions (this is implementation dependent).

  21. Synchronous Send

  22. Buffered = Asynchronous Sends
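  A minimal sketch (not from the slides; bsend_example and its arguments are illustrative) of a buffered send: MPI_Bsend completes locally once the message has been copied into a buffer the user attached beforehand:

    #include <stdlib.h>
    #include <mpi.h>

    void bsend_example(int dest, MPI_Comm comm)
    {
        double a[100] = { 0.0 };
        int size;
        char *buf;

        /* the buffer must hold the message plus MPI_BSEND_OVERHEAD */
        MPI_Pack_size(100, MPI_DOUBLE, comm, &size);
        size += MPI_BSEND_OVERHEAD;
        buf = malloc(size);
        MPI_Buffer_attach(buf, size);

        /* completes as soon as the data is copied into the buffer */
        MPI_Bsend(a, 100, MPI_DOUBLE, dest, 0, comm);

        MPI_Buffer_detach(&buf, &size);  /* blocks until buffered messages are sent */
        free(buf);
    }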

  23. Definitions (Blocking and non-Blocking) • “Completion” of the communication means that memory locations used in the message transfer can be safely accessed • Send: variable sent can be reused after completion • Receive: variable received can now be used • MPI communication modes differ in what conditions are needed for completion • Communication modes can be blocking or non-blocking • Blocking: return from routine implies completion • Non-blocking: routine returns immediately, user must test for completion

  24. Blocking Operation

  25. Non-Blocking Operation

  26. Non-Blocking Operation
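  A minimal sketch (not from the slides; exchange_nonblocking is illustrative and assumes exactly two ranks): the non-blocking calls return immediately, computation can overlap the transfers, and MPI_Waitall blocks until both complete:

    #include <mpi.h>

    void exchange_nonblocking(int myrank, MPI_Comm comm)
    {
        double sendbuf[100] = { 0.0 }, recvbuf[100];
        int other = 1 - myrank;          /* partner rank (2-rank assumption) */
        MPI_Request reqs[2];
        MPI_Status stats[2];

        MPI_Irecv(recvbuf, 100, MPI_DOUBLE, other, 0, comm, &reqs[0]);
        MPI_Isend(sendbuf, 100, MPI_DOUBLE, other, 0, comm, &reqs[1]);
        /* ... overlap computation here ... */
        MPI_Waitall(2, reqs, stats);     /* buffers are safe to reuse after this */
    }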

  27. Deadlock
  Deadlock occurs when two (or more) processes are blocked and each is waiting for the other to make progress.
  (Diagram: processes 0 and 1 each init and compute, then block. Process 0 proceeds only if process 1 has taken action B, and process 1 proceeds only if process 0 has taken action A, so neither ever reaches its action or terminates.)

  28. Deadlock

    /* simple deadlock */
    #include <stdio.h>
    #include <mpi.h>
    int main(int argc, char **argv)
    {
        int myrank;
        MPI_Status status;
        double a[100], b[100];
        MPI_Init(&argc, &argv);                  /* initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* get rank */
        if (myrank == 0) {           /* receive, then send a message */
            MPI_Recv(b, 100, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &status);
            MPI_Send(a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD);
        } else if (myrank == 1) {    /* receive, then send a message */
            MPI_Recv(b, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status);
            MPI_Send(a, 100, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD);
        }
        MPI_Finalize();                          /* terminate MPI */
        return 0;
    }

  Both processes block in MPI_Recv, each waiting for a message the other has not yet sent.

  29. Avoiding Deadlock
  (Diagram: the same two processes, but the actions are reordered. Process 1 takes action B before waiting, so process 0 can proceed and take action A, which in turn lets process 1 proceed; both terminate.)

  30. Avoiding Deadlock

    /* safe exchange */
    #include <stdio.h>
    #include <mpi.h>
    int main(int argc, char **argv)
    {
        int myrank;
        MPI_Status status;
        double a[100], b[100];
        MPI_Init(&argc, &argv);                  /* initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* get rank */
        if (myrank == 0) {           /* receive a message, then send one */
            MPI_Recv(b, 100, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &status);
            MPI_Send(a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD);
        } else if (myrank == 1) {    /* send a message, then receive one */
            MPI_Send(a, 100, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD);
            MPI_Recv(b, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &status);
        }
        MPI_Finalize();                          /* terminate MPI */
        return 0;
    }
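  An alternative (standard MPI, not shown in the slides; other is an assumed variable holding the partner's rank) is MPI_Sendrecv, which performs the paired send and receive in one call and lets the library order them safely:

    MPI_Sendrecv(a, 100, MPI_DOUBLE, other, 17,   /* send a with tag 17   */
                 b, 100, MPI_DOUBLE, other, 17,   /* receive into b, tag 17 */
                 MPI_COMM_WORLD, &status);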

  31. Communicator MPI_COMM_WORLD
