
CS 684 - PowerPoint PPT Presentation



Presentation Transcript

Message Passing

  • Based on a multi-processor model

    • Set of independent processors

    • Connected via some communication network

  • All communication between processes is done via a message sent from one to the other

MPI

  • Message Passing Interface

  • Computation is made of:

    • One or more processes

    • Communicate by calling library routines

  • MIMD programming model

  • SPMD is the most common.


  • Processes use point-to-point communication operations

  • Collective communication operations are also available.

  • Communication can be modularized by the use of communicators.

    • MPI_COMM_WORLD is the base.

    • Used to identify subsets of processes


  • MPI is a large, complex library, but most problems can be solved using just 6 basic functions.

    • MPI_Init

    • MPI_Finalize

    • MPI_Comm_size

    • MPI_Comm_rank

    • MPI_Send

    • MPI_Recv

MPI Basics

  • Almost all calls require a communicator handle as an argument.


  • MPI_Init and MPI_Finalize

    • don’t require a communicator handle

    • used to begin and end an MPI program

    • MUST be called to begin and end

MPI Basics

  • MPI_Comm_size

    • determines the number of processes in the communicator group

  • MPI_Comm_rank

    • determines the integer identifier assigned to the current process

    • zero based

MPI Basics

#include <stdio.h>

#include <mpi.h>

int main(int argc, char *argv[])
{
    int iproc, nproc;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &iproc);
    printf("I am processor %d of %d\n", iproc, nproc);

    MPI_Finalize();
    return 0;
}

MPI Communication

  • MPI_Send

    • Sends an array of a given type

    • Requires a destination rank, a count, and a datatype

  • MPI_Recv

    • Receives an array of a given type

    • Same arguments as MPI_Send, but with a source instead of a destination

    • Extra parameter

      • MPI_Status variable.

MPI Basics

  • Made for both FORTRAN and C

  • Standards for C

    • MPI_ prefix to all calls

    • First letter of function name is capitalized

    • Returns MPI_SUCCESS or error code

    • MPI_Status structure

    • MPI data types for each C type

    • OUT parameters passed using & operator

Using MPI

  • Based on rsh or ssh

    • requires a .rhosts file or ssh key setup

      • hostname login

  • Path to compiler (CS open labs)

    • MPI_HOME=/users/faculty/snell/mpich

    • MPI_CC=$MPI_HOME/bin/mpicc

  • Use mpcc on marylou10

  • Use mpicc on marylou4

  • Use cc prog.c -o prog -lmpi on marylou & marylou2

Using MPI

  • Write program

  • Compile using mpicc or mpcc

  • Write process file (linux cluster)

    • host nprocs full_path_to_prog

    • 0 for nprocs on first line, 1 for all others

  • Run program (linux cluster)

    • prog -p4pg process_file args

    • mpirun -np #procs -machinefile machines prog

  • Run program (scheduled on marylou4 using pbs)

    • mpirun -np #procs -machinefile $PBS_NODEFILE prog

Example

  • HINT benchmark

  • Found at /users/faculty/snell/CS584/HINT or ~qos/Hint

Example: Global Sum

#include "mpi.h"

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <math.h>

#define MAXSIZE 1000

int main(int argc, char *argv[])
{
    int myid, numprocs;
    int data[MAXSIZE], i, x, low, high, myresult = 0, result;
    char fn[255];
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0)
    { /* Open input file and initialize data */
        /* How fn is filled in was elided on the slide;
           a file name from the command line is assumed here */
        strcpy(fn, argv[1]);
        if ((fp = fopen(fn, "r")) == NULL) {
            printf("Can't open the input file: %s\n\n", fn);
            exit(1);
        }
        for (i = 0; i < MAXSIZE; i++) fscanf(fp, "%d", &data[i]);
        fclose(fp);
    }

    /* broadcast data */
    MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);

    /* Add my portion of data */
    x = MAXSIZE / numprocs;
    low = myid * x;
    high = low + x;
    for (i = low; i < high; i++)
        myresult += data[i];
    printf("I got %d from %d\n", myresult, myid);

    /* Compute global sum */
    MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0) printf("The sum is %d.\n", result);

    MPI_Finalize();
    return 0;
}
Determinism

  • Message Passing programs are non-deterministic because of concurrency

    • Consider 2 processes sending messages to a third

  • MPI only guarantees that 2 messages sent from a single process to another will arrive in order.

  • It is the programmer's responsibility to ensure computation determinism

MPI & Determinism

  • MPI

    • A process may specify the source of the message

    • A process may specify the type (tag) of the message

  • Non-Determinism

    • Receiving with the wildcards MPI_ANY_SOURCE or MPI_ANY_TAG makes the order of arrivals non-deterministic

Example

for (n = 0; n < nproc/2; n++)
{
    MPI_Send(buff, BSIZE, MPI_FLOAT, rnbor, 1, MPI_COMM_WORLD);
    /* The receive arguments were garbled on the slide; the source is
       assumed here to be the left neighbor (lnbor), matching tag 1 */
    MPI_Recv(buff, BSIZE, MPI_FLOAT, lnbor,
             1, MPI_COMM_WORLD, &status);
    /* Process the data */
}

Global Operations

  • Coordinated communication involving multiple processes.

  • Can be implemented by the programmer using sends and receives

  • For convenience, MPI provides a suite of collective communication functions.

  • All participating processes must call the same function.

Collective Communication

  • Barrier

    • Synchronize all processes

  • Broadcast

    • Send the same data from one process to all processes

  • Gather

    • Gather data from all processes to one process

  • Scatter

    • Distribute distinct pieces of data from one process to each process
  • Reduction

    • Global sums, products, etc.

[Slide diagram: program flow using collectives - broadcast Problem Size, Input data, and Boundary values; compute; Find Max Error; Collect Results]

MPI_Reduce

MPI_Reduce(inbuf, outbuf, count, type, op, root, comm)

MPI_Allreduce

MPI_Allreduce(inbuf, outbuf, count, type, op, comm)

Other MPI Features

  • Asynchronous Communication

    • MPI_Isend

    • MPI_Wait and MPI_Test

    • MPI_Probe and MPI_Get_count

  • Modularity

    • Communicator creation routines

  • Derived Datatypes