
Distributed-Memory (Message-Passing) Paradigm


Presentation Transcript


  1. Distributed-Memory (Message-Passing) Paradigm FDI 2007 Track Q Day 1 – Afternoon Session

  2. Characteristics of Distributed-Memory Machines • Scalable interconnection network. • Some memory physically local, some remote. • Message-passing programming model. • Major issues: network latency and bandwidth, message-passing overhead, data distribution. • Incremental parallelization and debugging can be difficult.

  3. What is MPI? • De facto standard API for explicit message-passing MIMD/SPMD programming. • Many implementations over many networks. • Developed in the mid-1990s by a consortium, reflecting lessons learned from machine-specific libraries and PVM. • Focused on: homogeneous MPPs, high performance, library writing, portability. • For more information: people.cs.vt.edu/~ribbens/fdi

  4. Six-Function MPI • MPI_Init Initialize MPI • MPI_Finalize Close it down • MPI_Comm_rank Get my process # • MPI_Comm_size How many total? • MPI_Send Send message • MPI_Recv Receive message

  5. MPI “Hello World” in Fortran77

      implicit none
      include 'mpif.h'
      integer myid, numprocs, ierr
      call mpi_init( ierr )
      call mpi_comm_rank( mpi_comm_world, myid, ierr )
      call mpi_comm_size( mpi_comm_world, numprocs, ierr )
      print *, "hello from ", myid, " of ", numprocs
      call mpi_finalize( ierr )
      stop
      end

  6. MPI “Hello World” in C

      #include "mpi.h"
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char *argv[]) {
          int myid, numprocs, namelen;
          char processor_name[MPI_MAX_PROCESSOR_NAME];
          MPI_Init(&argc, &argv);
          MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
          MPI_Comm_rank(MPI_COMM_WORLD, &myid);
          MPI_Get_processor_name(processor_name, &namelen);
          printf("hello from %s: process %d of %d\n",
                 processor_name, myid, numprocs);
          MPI_Finalize();
          return 0;
      }

  7. Function MPI_Send

     MPI_SEND(buf, count, datatype, dest, tag, comm)
       IN  buf       initial address of send buffer (choice)
       IN  count     number of entries to send (integer)
       IN  datatype  datatype of each entry (handle)
       IN  dest      rank of destination (integer)
       IN  tag       message tag (integer)
       IN  comm      communicator (handle)

     C binding:
       int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                    int dest, int tag, MPI_Comm comm)

     Fortran binding:
       MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERR)
       <type> BUF(*)
       INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERR

  8. Function MPI_Recv

     MPI_RECV(buf, count, datatype, source, tag, comm, status)
       OUT buf       initial address of receive buffer (choice)
       IN  count     max number of entries to receive (integer)
       IN  datatype  datatype of each entry (handle)
       IN  source    rank of source (integer)
       IN  tag       message tag (integer)
       IN  comm      communicator (handle)
       OUT status    return status (Status)

     C binding:
       int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                    int source, int tag, MPI_Comm comm, MPI_Status *status)

     Fortran binding:
       MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERR)
       <type> BUF(*)
       INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM
       INTEGER STATUS(MPI_STATUS_SIZE), IERR

  9. MPI Send and Recv Semantics • These are “standard mode”, blocking calls. • Send returns when buf may be re-used; Recv returns when data in buf is available. • Count consecutive items of type datatype, beginning at buf, are sent to process with rank dest. • Tag can be used to distinguish among messages from the same source. • Messages are non-overtaking. • Buffering policies depend on implementation.

  10. MPI Communicators • A communicator can be thought of as a set of processes; every communication event takes place within a particular communicator. • MPI_COMM_WORLD is the initial set of processes (note the static process model). • Why do communicators exist? • Collective operations over subsets of processes • Can define special topologies for sets of processes • Separate communication contexts for libraries.

  11. MPI Collective Operations • An operation over an entire communicator • Must be called by every member of the communicator • Three classes of collective operations: • Synchronization (MPI_Barrier) • Data movement • Collective computation

  12. Collective Patterns (Gropp)

  13. Collective Computation Patterns (Gropp)

  14. Collective Routines (Gropp) • Many routines: Allgather Allgatherv Allreduce Alltoall Alltoallv Bcast Gather Gatherv Reduce ReduceScatter Scan Scatter Scatterv • The “All” versions (Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv) deliver results to all participating processes. • The “v” versions allow the chunks to have different sizes. • Allreduce, Reduce, ReduceScatter, and Scan take both built-in and user-defined combination functions.

  15. MPI topics not covered … • Topologies: Cartesian and graph • User-defined types • Message-passing modes • Intercommunicators • MPI-2 topics: MPI I/O, remote memory access, dynamic process management
