
An Introduction to MPI (message passing interface)



  1. An Introduction to MPI (message passing interface)

  2. Organization
• In general, grid apps can be organized as:
  • Peer-to-peer
  • Manager-worker (one manager, many workers)
• We will focus on manager-worker.

  3. Concepts
• MPI size = # of processes in grid app
• MPI rank
  • individual process number in the executing grid app
  • 0..size-1
• In the manager-worker framework,
  • let the manager's rank = 0
  • and worker ranks be 1..size-1
• Each individual process can determine its rank.

  4. More concepts
• Blocking vs. nonblocking
  • Blocking = the calling process waits (blocks) until the operation completes
  • Nonblocking = the calling process does not wait (block); it initiates the operation but does not wait for it to complete
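To make the distinction concrete, here is a minimal sketch of the nonblocking style (illustrative only; nonblocking calls are not used in the course examples below). MPI_Irecv initiates the receive and returns immediately; MPI_Wait blocks only when the result is actually needed:

int value = 0;
MPI_Request request;
MPI_Status status;
MPI_Irecv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request );  //returns immediately
//... the calling process is free to do other work here ...
MPI_Wait( &request, &status );  //now block until the receive completes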

  5. Compiling MPI grid apps (on scott)
• Don’t use g++ directly!
• Use: ~ggrevera/lammpi/bin/mpic++
Examples:
    mpic++ -g  -o mpiExample2.exe mpiExample2.cpp    # debug version
    mpic++ -O3 -o mpiExample2.exe mpiExample2.cpp    # optimized version

  6. Starting, running, and stopping grid apps
• Before we can run our grid apps, we must first start lam mpi. Enter the command:
    lamboot -v
• An optional lamhosts file may be specified to indicate the host computers (along with CPU configurations) that participate in the grid (see the example below).
• To run our grid app (called mpiExample1.exe), use:
    mpirun -np 4 ./mpiExample1.exe
• This creates and runs a 4 process grid app.
• When you are finished, stop lam mpi via:
    lamhalt
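For reference, a lamhosts file is plain text with one host per line and an optional CPU count. A minimal sketch (these host names are placeholders, not actual grid machines):

    scott.sju.edu cpu=2
    node1.sju.edu cpu=2
    node2.sju.edu

It is then passed to lamboot on the command line: lamboot -v lamhosts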

  7. Getting started
#include <mpi.h>  //do this once for mpi definitions

int MPI_Init ( int *pargc, char ***pargv );

INPUT PARAMETERS
    pargc - Pointer to the number of arguments
    pargv - Pointer to the argument vector

  8. Finish up
int MPI_Finalize ( void );

  9. Other useful MPI functions
int MPI_Comm_rank ( MPI_Comm comm, int *rank );

INPUT PARAMETERS
    comm - communicator (handle)

OUTPUT PARAMETER
    rank - rank of the calling process in group of comm (integer)

  10. Other useful MPI functions
int MPI_Comm_size ( MPI_Comm comm, int *psize );

INPUT PARAMETER
    comm - communicator (handle - must be intracommunicator)

OUTPUT PARAMETER
    psize - number of processes in the group of comm (integer)
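Putting MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize together, the smallest useful grid app looks something like this (a minimal sketch, not one of the course examples):

#include <mpi.h>
#include <stdio.h>

int main ( int argc, char* argv[] ) {
    MPI_Init( &argc, &argv );                //once, before any other MPI call
    int rank, size;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );  //this process' number (0..size-1)
    MPI_Comm_size( MPI_COMM_WORLD, &size );  //total number of processes
    printf( "process %d of %d \n", rank, size );
    MPI_Finalize();                          //once, when finished with MPI
    return 0;
}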

  11. Other useful non MPI functions
#include <unistd.h>

int gethostname ( char *name, size_t len );

  12. Other useful non MPI functions
#include <sys/types.h>
#include <unistd.h>

pid_t getpid ( void );
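Both are handy for labeling output when many processes run on many hosts, as example 1 does below. A brief usage sketch (using the headers shown above):

char name[ 1024 ];
gethostname( name, sizeof( name ) );  //e.g. "scott"
printf( "host=%s, pid=%d \n", name, (int)getpid() );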

  13. Example 1
• This program is a skeleton of a parallel MPI application using the one manager/many workers framework.
• http://www.sju.edu/~ggrevera/software/csc4035/mpiExample1.cpp

  14. Example 1
/** \file mpiExample1.cpp
    \brief MPI programming example #1.
    \author george j. grevera, ph.d.

    This program is a skeleton of a parallel MPI application using the
    one manager/many workers framework.
    <pre>
    compile: mpic++ -g  -o mpiExample1.exe mpiExample1.cpp  # debug version
             mpic++ -O3 -o mpiExample1.exe mpiExample1.cpp  # optimized version
    run    : lamboot -v                      # to start lam mpi
             mpirun -np 4 ./mpiExample1.exe  # run in parallel w/ 4 processes
             lamhalt                         # to stop lam mpi
    </pre>
*/
#include <assert.h>
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

  15. Example 1
static char mpiName[ 1024 ];  ///< host computer name
static int  mpiRank;          ///< number of this process (0..n-1)
static int  mpiSize;          ///< total number of processes (n)
static int  myPID;            ///< process id

static void manager ( void ); ///< forward declaration (defined below)
static void worker  ( void ); ///< forward declaration (defined below)
//----------------------------------------------------------------------

  16. Example 1
//----------------------------------------------------------------------
/** \brief main program entry point for example 1. execution begins here.
    \param argc count of command line arguments.
    \param argv array of command line arguments.
    \returns 0 is always returned.
*/
int main ( int argc, char* argv[] ) {  //not const because MPI_Init may change them
    if (MPI_Init( &argc, &argv ) != MPI_SUCCESS) {
        //actually, we'll never get here but it is a good idea to check.
        //if MPI_Init fails, mpi will exit with an error message.
        puts( "mpi init failed." );
        return 0;
    }
    //get the name of this computer
    gethostname( mpiName, sizeof( mpiName ) );
    //determine rank
    MPI_Comm_rank( MPI_COMM_WORLD, &mpiRank );
    //determine the total number of processes
    MPI_Comm_size( MPI_COMM_WORLD, &mpiSize );
    //get the process id
    myPID = getpid();

  17. Example 1
    printf( "mpi initialized. my rank=%d, size=%d, pid=%d. \n",
            mpiRank, mpiSize, myPID );
    if (mpiSize < 2) {
        puts( "this example requires at least 1 manager and 1 worker process." );
        MPI_Finalize();
        return 0;
    }
    if (mpiRank == 0)    manager();
    else                 worker();
    MPI_Finalize();
    return 0;
}
//----------------------------------------------------------------------

  18. Example 1
//----------------------------------------------------------------------
/** \brief manager code for example 1 */
static void manager ( void ) {
    printf( "manager: my rank=%d, size=%d, pid=%d. \n", mpiRank, mpiSize, myPID );
    /** \todo insert manager code here. */
}
//----------------------------------------------------------------------

  19. Example 1
//----------------------------------------------------------------------
/** \brief worker code for example 1 */
static void worker ( void ) {
    printf( "worker: my rank=%d, size=%d, pid=%d. \n", mpiRank, mpiSize, myPID );
    /** \todo insert worker code here. */
}
//----------------------------------------------------------------------

  20. More useful MPI functions
int MPI_Send ( void *buf, int count, MPI_Datatype dtype, int dest, int tag, MPI_Comm comm );

INPUT PARAMETERS
    buf   - initial address of send buffer (choice)
    count - number of elements in send buffer (nonnegative integer)
    dtype - datatype of each send buffer element (handle)
    dest  - rank of destination (integer)
    tag   - message tag (integer)
    comm  - communicator (handle)

  21. More useful MPI functions
int MPI_Recv ( void *buf, int count, MPI_Datatype dtype, int src, int tag, MPI_Comm comm, MPI_Status *stat );

INPUT PARAMETERS
    count - maximum number of elements in receive buffer (integer)
    dtype - datatype of each receive buffer element (handle)
    src   - rank of source (integer)
    tag   - message tag (integer)
    comm  - communicator (handle)

OUTPUT PARAMETERS
    buf   - initial address of receive buffer (choice)
    stat  - status object (Status), which can be the MPI constant MPI_STATUS_IGNORE if the return status is not desired
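A minimal matched pair (an illustrative sketch, assuming rank was obtained via MPI_Comm_rank; the tag value 0 is arbitrary): rank 0 sends one int to rank 1.

int value = 42;
if (rank == 0) {
    //send 1 int to the process with rank 1
    MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
} else if (rank == 1) {
    //receive 1 int from the process with rank 0
    MPI_Status status;
    MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
}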

  22. Defining messages
struct Message {
    enum {
        OP_WORK,   ///< manager to worker - here's your work assignment
        OP_EXIT,   ///< manager to worker - time to exit
        OP_RESULT  ///< worker to manager - here's the result
    };
    int operation;  ///< one of the above
    /** \todo define operation specific parameters here. */
};

C enums assign successive integers to the given constants/symbols (here OP_WORK=0, OP_EXIT=1, OP_RESULT=2). C structs are like Java or C++ objects with only the data members and without the methods/functions.

  23. Example 2
• This program is a skeleton of a parallel MPI application using the one manager/many workers framework. The process with an MPI rank of 0 is considered to be the manager; processes with MPI ranks of 1..mpiSize-1 are workers. Messages are defined and are sent from the manager to the workers.
• http://www.sju.edu/~ggrevera/software/csc4035/mpiExample2.cpp

  24. Example 2
//----------------------------------------------------------------------
/** \brief manager code for example 2. */
static void manager ( void ) {
    printf( "manager: my rank=%d, size=%d, pid=%d. \n", mpiRank, mpiSize, myPID );
    /** \todo insert manager code here. */
    //as an example, send an empty work message to each worker
    struct Message m;
    m.operation = m.OP_WORK;
    assert( mpiSize > 3 );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 1, m.operation, MPI_COMM_WORLD );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 2, m.operation, MPI_COMM_WORLD );
    MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 3, m.operation, MPI_COMM_WORLD );
}
//----------------------------------------------------------------------

  25. Example 2
//----------------------------------------------------------------------
/** \brief worker code for example 2. */
static void worker ( void ) {
    printf( "worker: my rank=%d, size=%d, pid=%d. \n", mpiRank, mpiSize, myPID );
    /** \todo insert worker code here. */
    //as an example, receive a message
    MPI_Status status;
    struct Message m;
    MPI_Recv( &m, sizeof( m ), MPI_UNSIGNED_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status );
    printf( "worker %d (%d): received message. \n", mpiRank, myPID );
}
//----------------------------------------------------------------------
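In a fuller version of this framework, the worker typically loops: receive a message, dispatch on its operation code, return results, and exit when told to. A hedged sketch using the Message struct from slide 22 (workerLoop is a hypothetical name, not part of the course examples):

static void workerLoop ( void ) {
    for ( ; ; ) {
        //wait for the next assignment from the manager (rank 0)
        MPI_Status status;
        struct Message m;
        MPI_Recv( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status );
        if (m.operation == Message::OP_EXIT)    break;  //manager says we're done
        if (m.operation == Message::OP_WORK) {
            /** \todo perform the work assignment here. */
            m.operation = Message::OP_RESULT;
            //report the result back to the manager
            MPI_Send( &m, sizeof( m ), MPI_UNSIGNED_CHAR, 0, m.operation, MPI_COMM_WORLD );
        }
    }
}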

  26. More useful MPI functions
MPI_Barrier - Blocks until all processes have reached this routine.

int MPI_Barrier ( MPI_Comm comm );

INPUT PARAMETERS
    comm - communicator (handle)
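One common use of a barrier is timing a parallel section: synchronize, start the clock, work, synchronize again, stop the clock. A minimal sketch (MPI_Wtime returns wall-clock time in seconds as a double):

MPI_Barrier( MPI_COMM_WORLD );  //make sure everyone starts together
double start = MPI_Wtime();
/* ... parallel work ... */
MPI_Barrier( MPI_COMM_WORLD );  //make sure everyone has finished
double elapsed = MPI_Wtime() - start;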
