Parallel Computing—Introduction to Message Passing Interface (MPI)



Two Important Concepts

  • Two fundamental concepts of parallel programming are:

    • Domain decomposition: the data of the problem is partitioned among processes

    • Functional decomposition: the tasks of the problem are partitioned among processes


Domain Decomposition
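The original slide shows this as a figure. As a minimal C sketch (the names N and work_on are illustrative assumptions, not part of MPI), each process can derive the block of a global domain it owns from its rank and the total number of processes:

// Block-partition N elements across 'size' processes; this process owns [lo, hi).
// Assumes N is divisible by size; rank and size come from
// MPI_Comm_rank and MPI_Comm_size, shown later in these slides.
int N = 1000000;            // global problem size (illustrative)
int chunk = N / size;       // elements owned by each process
int lo = rank * chunk;      // first index owned by this process
int hi = lo + chunk;        // one past the last index owned
for (int i = lo; i < hi; i++)
    work_on(i);             // hypothetical per-element computation

Functional decomposition, by contrast, assigns different tasks rather than different data to each process, typically by branching on the rank.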


Functional Decomposition


Message Passing Interface (MPI)

  • MPI is a standard (an interface or an API):

    • It defines a set of methods that are used by application developers to write their applications

    • MPI libraries implement these methods

    • MPI itself is not a library; it is a specification that implementations follow

    • MPI-1.2 is the most popular specification version

  • Reasons for popularity:

    • Software and hardware vendors were involved

    • Significant contribution from academia

    • MPICH served as an early reference implementation

    • MPI compilers are simply wrappers around widely used C and Fortran compilers

  • History:

    • The first draft specification was produced in 1993

    • MPI-2.0, introduced in 1997, added many new features to MPI

    • Bindings are available for C, C++, and Fortran

  • MPI is a success story:

    • It is the most widely adopted programming paradigm on IBM Blue Gene systems

  • At least two production-quality MPI libraries:

    • MPICH2 (http://www-unix.mcs.anl.gov/mpi/mpich2/)

    • OpenMPI (http://open-mpi.org)

  • There’s even a Java library:

    • MPJ Express (http://mpj-express.org)


Message Passing Model

  • The message passing model allows processes to communicate by passing messages:

    • Processes do not share memory

  • Data transfer between processes requires cooperative operations performed by each side:

    • One process sends the message while the other receives it


Distributed Memory Cluster

[Figure: eight processes (Proc 0 through Proc 7), each with its own CPU and memory, exchange messages over a LAN interconnect such as Ethernet, Myrinet, or Infiniband.]


Writing “Hello World” MPI Program

  • The basic structure of an MPI program is simple:

    • Initialize MPI environment:

      • MPI_Init(&argc,&argv); // C Code

      • MPI.Init(args); // Java Code

    • Send or receive message:

      • MPI_Send(..); // C Code

      • MPI.COMM_WORLD.Send(); // Java Code

    • Finalize MPI environment:

      • MPI_Finalize(); // C Code

      • MPI.Finalize(); // Java Code


Hello World in C

#include <stdio.h>

#include <string.h>

#include "mpi.h"

int main(int argc, char* argv[]) {

int my_rank; // rank (id) of this process

int p; // total number of processes

// Initialize MPI

MPI_Init(&argc, &argv);

// Find out the id or rank of the current process

MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); // get the rank

// Get total number of processes

MPI_Comm_size(MPI_COMM_WORLD, &p); // get total number of processes

// Print the rank of the process

printf("Hello World from process no %d\n", my_rank);

MPI_Finalize();

return 0;

}


Hello World in Java

import java.util.*;

import mpi.*;

public class HelloWorld {

public static void main(String[] args) throws Exception {

// Initialize MPI

MPI.Init(args); // start up MPI

// Get total number of processes and rank

int size = MPI.COMM_WORLD.Size();

int rank = MPI.COMM_WORLD.Rank();

System.out.println("Hello World <" + rank + ">");

MPI.Finalize();

}

}


After Initialization

import java.util.*;

import mpi.*;

..

// Initialize MPI

MPI.Init(args); // start up MPI

// Get total number of processes and rank

size = MPI.COMM_WORLD.Size();

rank = MPI.COMM_WORLD.Rank();

..


What is size?

import java.util.*;

import mpi.*;

..

// Get total number of processes

size = MPI.COMM_WORLD.Size();

..

  • The total number of processes in a communicator:

    • For example, if the job was started with six processes, the size of MPI.COMM_WORLD is 6


What is rank?

import java.util.*;

import mpi.*;

..

// Get the rank of this process

rank = MPI.COMM_WORLD.Rank();

..

  • The unique identifier (id) of a process in a communicator:

    • Each of the six processes in MPI.COMM_WORLD has a distinct rank, from 0 to 5


Running “HelloWorld” in C

  • Write parallel code

  • Start MPICH2 daemon

  • Write machines file

  • Start the parallel job (typical commands are sketched below)
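A minimal sketch of these steps with MPICH2’s mpd-based runtime (command names from the MPICH2 1.x era; exact flags vary by installation, so treat these as assumptions):

mpicc hello.c -o hello      # compile with the MPI wrapper compiler
mpdboot -n 2 -f machines    # start MPICH2 daemons on the hosts listed in 'machines'
mpiexec -n 4 ./hello        # launch the job with 4 parallel processes
mpdallexit                  # shut down the daemons when finished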


Running “Hello World” in Java

  • The code is executed on a cluster called “Starbug”:

    • One head-node “holly” and eight compute-nodes

  • Steps:

    • Write machines file

    • Bootstrap MPJ Express (or any MPI library) runtime

    • Write parallel application

    • Compile and execute


Write Machines File
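The original slide shows a screenshot. A machines file is simply a list of host names, one per line; a hypothetical file for the Starbug cluster above might read:

holly
node1
node2
(one line for each remaining compute node)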



Write Parallel Program


Compile and Execute
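A minimal sketch with MPJ Express (launcher and flag names as documented by MPJ Express; paths and the process count are assumptions):

javac -cp .:$MPJ_HOME/lib/mpj.jar HelloWorld.java   # compile against the MPJ Express library
mpjboot machines                                    # boot the runtime daemons on the listed hosts
mpjrun.sh -np 6 HelloWorld                          # run with 6 processes (mpjrun.bat on Windows)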


Single Program Multiple Data (SPMD) Model

import java.util.*;

import mpi.*;

public class HelloWorld {

public static void main(String[] args) throws Exception {

MPI.Init(args); // start up MPI

int size = MPI.COMM_WORLD.Size();

int rank = MPI.COMM_WORLD.Rank();

if (rank == 0) {

System.out.println("I am Process 0");

}

else if (rank == 1) {

System.out.println("I am Process 1");

}

MPI.Finalize();

}

}


Single Program Multiple Data (SPMD) Model

import java.util.*;

import mpi.*;

public class HelloWorld {

public static void main(String[] args) throws Exception {

MPI.Init(args); // start up MPI

int size = MPI.COMM_WORLD.Size();

int rank = MPI.COMM_WORLD.Rank();

if (rank % 2 == 0) {

System.out.println("I am an even process");

}

else if (rank % 2 == 1) {

System.out.println("I am an odd process");

}

MPI.Finalize();

}

}


Point-to-Point Communication

  • The most fundamental facility provided by MPI

  • Basically, the exchange of messages between two processes (a minimal C sketch follows this list):

    • One process (source) sends message

    • The other process (destination) receives message
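A minimal sketch in C, assuming the standard MPI C bindings (the payload and tag value are illustrative):

int number;   // message payload
int tag = 99; // tag identifying this message
if (rank == 0) {
    number = 42; // source process sends
    MPI_Send(&number, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
} else if (rank == 1) {
    // destination process receives
    MPI_Recv(&number, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}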


Point-to-Point Communication

  • A message can be sent for each basic datatype:

    • Floats, Integers, Doubles …

  • Each message carries a “tag”, an identifier used to match a send with the corresponding receive

[Figure: two messages between the same pair of processes, distinguished by their tags Tag1 and Tag2.]


Point-to-Point Communication

[Figure: eight processes (Process 0 through Process 7) in COMM_WORLD; a message carrying integers is sent to Process 4, labelled with a tag.]


Blocking and Non-blocking

  • There are blocking and non-blocking versions of the send and receive methods

  • Blocking versions:

    • A process calls send() or recv(); these methods return only once the message has been physically sent or received

  • Non-blocking versions:

    • A process calls isend() or irecv(); these methods return immediately

    • The user can check the status of the message by calling test() or wait()

  • Note the “i” in isend() and irecv()

  • Non-blocking versions allow computation and communication to overlap (a C sketch follows):

    • How much overlap is achieved also depends on the quality of the implementation
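A minimal sketch in C, assuming the standard MPI C bindings (do_other_work is a hypothetical stand-in for the overlapped computation):

MPI_Request req = MPI_REQUEST_NULL;
int data = 0;
if (rank == 0) {
    data = 42;
    MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req); // returns immediately
} else if (rank == 1) {
    MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req); // returns immediately
}
do_other_work();                   // hypothetical computation overlapped with the transfer
MPI_Wait(&req, MPI_STATUS_IGNORE); // block only now, until the transfer completes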


[Figure: timing diagrams. Blocking: the sender calls send() and the receiver calls recv(), and each CPU waits until the transfer completes. Non-blocking: the sender calls isend() and the receiver calls irecv(), both CPUs perform other tasks while the transfer proceeds, and each later calls wait(), waiting only until completion.]

Modes of Send

  • The MPI standard defines four modes of send (sketched in C below):

    • Standard

    • Synchronous

    • Buffered

    • Ready
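In the C bindings, the four modes correspond to four calls with identical signatures (a sketch; buf, count, dest, and tag are illustrative, and buffered mode additionally requires a user-attached buffer):

MPI_Send (buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD); // standard: MPI chooses buffered or synchronous behaviour
MPI_Ssend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD); // synchronous: completes only once the receive has started
MPI_Bsend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD); // buffered: requires a prior MPI_Buffer_attach
MPI_Rsend(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD); // ready: the matching receive must already be posted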




Performance Evaluation of Point-to-Point Communication

  • Normally, ping-pong benchmarks are used to measure:

    • Latency: how long does it take to send N bytes from sender to receiver?

    • Throughput: how much bandwidth is achieved?

  • Latency is a useful measure for studying the performance of “small” messages

  • Throughput is a useful measure for studying the performance of “large” messages (a C sketch of the benchmark follows)
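A minimal sketch in C of the ping-pong loop between ranks 0 and 1 (REPS and the message size are illustrative choices):

enum { REPS = 1000 };
char buf[1024]; // 1 KB message (illustrative size)
double t0 = MPI_Wtime();
for (int i = 0; i < REPS; i++) {
    if (rank == 0) {        // ping: send, then wait for the echo
        MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) { // pong: receive, then echo back
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
}
double elapsed = MPI_Wtime() - t0;
// one-way latency is roughly elapsed / (2 * REPS);
// throughput is roughly total bytes moved / elapsed time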







