
PP Lab



Presentation Transcript


  1. PP Lab MPI programming VI

  2. Program 1
  Write a parallel program that calculates the sum of all numbers in a vector.
  • Calculate the partial sums.
  • Add the partial sums for the final answer.

  3. Functions to be used
  • MPI_Scatter: sends data from one task to all tasks in a group (each task, including the root, receives one chunk).
  • Synopsis:
  #include "mpi.h"
  int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
  • Input parameters:
  • sendbuf: address of send buffer
  • sendcount: number of elements sent to each process
  • sendtype: data type of send buffer elements
  (the above three arguments are significant only at root)
  • recvcount: number of elements in receive buffer
  • recvtype: data type of receive buffer elements
  • root: rank of sending process
  • comm: communicator
  • Output parameter:
  • recvbuf: address of receive buffer

  4. Functions to be used
  • MPI_Gather: gathers together values from a group of processes.
  • Synopsis:
  #include "mpi.h"
  int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
  • Input parameters:
  • sendbuf: starting address of send buffer
  • sendcount: number of elements in send buffer
  • sendtype: data type of send buffer elements
  • recvcount: number of elements for any single receive (significant only at root)
  • recvtype: data type of receive buffer elements (significant only at root)
  • root: rank of receiving process
  • comm: communicator
  • Output parameter:
  • recvbuf: address of receive buffer (significant only at root)

  5. Code (run with exactly 5 processes, e.g. mpirun -np 5, since 15 elements are scattered 3 per process)

  #include <mpi.h>
  #include <stdio.h>
  int main(int argc, char *argv[]) {
      int i, rank, a[15], b[3], psum, c[5], tsum;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0)
          for (i = 0; i < 15; i++) {
              printf("#%d: Enter number: ", i);
              scanf("%d", &a[i]);
          }
      /* distribute 3 elements of a[] to each of the 5 processes */
      MPI_Scatter(a, 3, MPI_INT, b, 3, MPI_INT, 0, MPI_COMM_WORLD);
      psum = 0;
      for (i = 0; i < 3; i++)
          psum += b[i];
      /* collect one partial sum from every process at the root */
      MPI_Gather(&psum, 1, MPI_INT, c, 1, MPI_INT, 0, MPI_COMM_WORLD);
      if (rank == 0) {
          tsum = 0;
          for (i = 0; i < 5; i++)
              tsum += c[i];
          printf("sum of the vector is %d\n", tsum);
      }
      MPI_Finalize();
      return 0;
  }

  6. Assignment
  Write out and explain the argument lists of the following functions, and say how each differs from the two functions you have seen:
  • MPI_Gatherv
  • MPI_Scatterv
  • MPI_Allgather
  • MPI_Allgatherv
  • MPI_Alltoall
  • MPI_Alltoallv
