
Introduction to MPI



  1. Introduction to MPI
     Nischint Rajmohan
     nischint@gatech.edu
     5 November 2007

  2. What can you expect?
     • Overview of MPI
     • Basic MPI commands
     • How to parallelize and execute a program using MPI & MPICH2
     What is outside the scope?
     • Technical details of MPI
     • MPI implementations other than MPICH
     • Hardware-specific optimization techniques

  3. Overview of MPI
     • MPI stands for Message Passing Interface
     • What is Message Passing Interface?
     • It is not a programming language or compiler specification
     • It is not a specific implementation or product
     • MPI is a specification for the developers and users of message passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be.
     • The specification lets you create libraries that allow you to solve problems in parallel, using message passing to communicate between processes
     • It provides bindings for widely used programming languages such as Fortran and C/C++

  4. Background on MPI
     • Early vendor systems (Intel's NX, IBM's EUI, TMC's CMMD) were not portable (or very capable)
     • Early portable systems (PVM, p4, TCGMSG, Chameleon) were mainly research efforts
       • Did not address the full spectrum of issues
       • Lacked vendor support
       • Were not implemented at the most efficient level
     • The MPI Forum organized in 1992 with broad participation by:
       • vendors: IBM, Intel, TMC, SGI, Convex, Meiko
       • portability library writers: PVM, p4
       • users: application scientists and library writers
       • finished in 18 months
     • Library standard defined by a committee of vendors, implementers, and parallel programmers

  5. Reasons for using an MPI standard
     • Standardization - MPI is the only message passing library that can be considered a standard. It is supported on virtually all HPC platforms and has practically replaced all previous message passing libraries.
     • Portability - There is no need to modify your source code when you port your application to a different platform that supports (and is compliant with) the MPI standard.
     • Performance opportunities - Vendor implementations should be able to exploit native hardware features to optimize performance.
     • Functionality - Over 115 routines are defined in MPI-1 alone.
     • Availability - A variety of implementations are available, both vendor and public domain.

  6. [Figure: an MPI operation within a communicator. On the send side, data A is partitioned into A1-A4 across processes 0-3; on the receive side, the pieces B1-B4 are assembled into B.]

  7. MPI Programming Model

  8. MPI Library

  9. Environment Management Routines
     • MPI_Init
     • Initializes the MPI execution environment. This function must be called in every MPI program, must be called before any other MPI function, and must be called only once in an MPI program. For C programs, MPI_Init may be used to pass the command line arguments to all processes, although this is not required by the standard and is implementation dependent.
     C: MPI_Init (&argc, &argv)
     Fortran: MPI_INIT (ierr)

  10. Environment Management Routines contd.
     • MPI_Comm_rank
     • Determines the rank of the calling process within the communicator. Initially, each process is assigned a unique integer rank between 0 and (number of processes - 1) within the communicator MPI_COMM_WORLD. This rank is often referred to as a task ID. If a process becomes associated with other communicators, it will have a unique rank within each of these as well.
     C: MPI_Comm_rank (comm, &rank)
     Fortran: MPI_COMM_RANK (comm, rank, ierr)

  11. Environment Management Routines contd.
     • MPI_Comm_size
     • Determines the number of processes in the group associated with a communicator. Generally used within the communicator MPI_COMM_WORLD to determine the number of processes being used by your application.
     C: MPI_Comm_size (comm, &size)
     Fortran: MPI_COMM_SIZE (comm, size, ierr)
     • MPI_Finalize
     • Terminates the MPI execution environment. This function should be the last MPI routine called in every MPI program; no other MPI routines may be called after it.
     C: MPI_Finalize ()
     Fortran: MPI_FINALIZE (ierr)

  12. MPI Sample Program: Environment Management Routines

     ! In Fortran
           program main
     ! the mpi include file
           include 'mpif.h'
           integer ierr, rank, size
     ! Initialize MPI
           call MPI_INIT( ierr )
     ! How many processors are there?
           call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )
     ! What processor am I (what is my rank)?
           call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
           print *, 'I am ', rank, ' of ', size
           call MPI_FINALIZE( ierr )
           end

     /* In C */
     /* the mpi include file */
     #include "mpi.h"
     #include <stdio.h>

     int main( int argc, char *argv[] )
     {
         int rank, size;
         /* Initialize MPI */
         MPI_Init( &argc, &argv );
         /* How many processors are there? */
         MPI_Comm_size( MPI_COMM_WORLD, &size );
         /* What processor am I (what is my rank)? */
         MPI_Comm_rank( MPI_COMM_WORLD, &rank );
         printf( "I am %d of %d\n", rank, size );
         MPI_Finalize();
         return 0;
     }

  13. Point to Point Communication Routines
     • MPI_Send
     • Basic blocking send operation. Routine returns only after the application buffer in the sending task is free for reuse. Note that this routine may be implemented differently on different systems. The MPI standard permits the use of a system buffer but does not require it.
     C: MPI_Send (&buf, count, datatype, dest, tag, comm)
     Fortran: MPI_SEND (buf, count, datatype, dest, tag, comm, ierr)

  14. Point to Point Communication Routines contd.
     • MPI_Recv
     • Receive a message and block until the requested data is available in the application buffer in the receiving task.
     C: MPI_Recv (&buf, count, datatype, source, tag, comm, &status)
     Fortran: MPI_RECV (buf, count, datatype, source, tag, comm, status, ierr)

  15. MPI Sample Program: Send and Receive

     ! In Fortran
           program shifter
           implicit none
           include 'mpif.h'
           integer my_pe_num, errcode, numbertoreceive, numbertosend
           integer status(MPI_STATUS_SIZE)
           call MPI_INIT(errcode)
           call MPI_COMM_RANK(MPI_COMM_WORLD, my_pe_num, errcode)
           numbertosend = 42
           if (my_PE_num.EQ.0) then
              call MPI_RECV( numbertoreceive, 1, MPI_INTEGER, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, status, errcode)
              print *, 'Number received is: ', numbertoreceive
           endif
           if (my_PE_num.EQ.1) then
              call MPI_SEND( numbertosend, 1, MPI_INTEGER, 0, 10, MPI_COMM_WORLD, errcode)
           endif
           call MPI_FINALIZE(errcode)
           end

     /* In C */
     #include <stdio.h>
     #include "mpi.h"

     int main(int argc, char** argv)
     {
         int my_PE_num, numbertoreceive, numbertosend = 42;
         MPI_Status status;
         MPI_Init(&argc, &argv);
         MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num);
         if (my_PE_num == 0) {
             MPI_Recv( &numbertoreceive, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
             printf("Number received is: %d\n", numbertoreceive);
         }
         else
             MPI_Send( &numbertosend, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);
         MPI_Finalize();
         return 0;
     }
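     Because the C sample above receives with MPI_ANY_SOURCE and MPI_ANY_TAG, the status object is the only way to learn who actually sent the message. A minimal sketch, not from the slides, of code that could follow the MPI_Recv call on rank 0 (the variable name actual_count is illustrative):

     /* status.MPI_SOURCE and status.MPI_TAG are standard fields of */
     /* MPI_Status; MPI_Get_count reports how many elements of the  */
     /* given datatype actually arrived.                            */
     int actual_count;
     MPI_Get_count(&status, MPI_INT, &actual_count);
     printf("Received %d int(s) from rank %d with tag %d\n",
            actual_count, status.MPI_SOURCE, status.MPI_TAG);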

  16. Collective Communication Routines
     • MPI_Barrier
     • Creates a barrier synchronization in a group. Each task, when reaching the MPI_Barrier call, blocks until all tasks in the group reach the same MPI_Barrier call.
     C: MPI_Barrier (comm)
     Fortran: MPI_BARRIER (comm, ierr)
     • MPI_Bcast
     • Broadcasts (sends) a message from the process with rank "root" to all other processes in the group (a short sketch follows this slide).
     C: MPI_Bcast (&buffer, count, datatype, root, comm)
     Fortran: MPI_BCAST (buffer, count, datatype, root, comm, ierr)
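     The slides give no collective example, so here is a minimal hypothetical C sketch in the style of the earlier samples: rank 0 broadcasts a value to every process, and a barrier separates the broadcast from the printing.

     #include <stdio.h>
     #include "mpi.h"

     int main(int argc, char **argv)
     {
         int rank, value = 0;
         MPI_Init(&argc, &argv);
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0)
             value = 42;          /* only the root has the data initially */
         /* every process calls MPI_Bcast; after it returns,
            value == 42 on all ranks */
         MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
         /* synchronize all ranks before printing */
         MPI_Barrier(MPI_COMM_WORLD);
         printf("Rank %d has value %d\n", rank, value);
         MPI_Finalize();
         return 0;
     }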

  17. Sources of Deadlocks
     Process 0: Send(1) then Recv(1)
     Process 1: Send(0) then Recv(0)
     • Send a large message from process 0 to process 1
     • If there is insufficient storage at the destination, the send must wait for the user to provide the memory space (through a receive)
     • What happens with this code?
     • This is called "unsafe" because it depends on the availability of system buffers (a safe reordering is sketched below)
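     One standard remedy, not shown on the slides, is to break the symmetry so that one side receives first; MPI_Sendrecv offers the same guarantee in a single call. A sketch of the reordering for the two-process exchange above, assuming an illustrative buffer size COUNT:

     #include "mpi.h"

     #define COUNT 1000000    /* large enough that system buffering may fail */

     int main(int argc, char **argv)
     {
         static double big_out[COUNT], big_in[COUNT];
         int rank;
         MPI_Status status;
         MPI_Init(&argc, &argv);
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         if (rank == 0) {
             MPI_Send(big_out, COUNT, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
             MPI_Recv(big_in,  COUNT, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &status);
         } else if (rank == 1) {
             /* rank 1 receives first, so the two sends never block
                each other waiting for buffer space */
             MPI_Recv(big_in,  COUNT, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
             MPI_Send(big_out, COUNT, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
         }
         MPI_Finalize();
         return 0;
     }

     Run with at least two processes (mpiexec -n 2 ./exchange); the original ordering, with both ranks sending first, can hang exactly when the message exceeds the available system buffering.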

  18. MPICH – MPI Implementation
     • MPICH is a freely available, portable implementation of MPI
     • MPICH acts as the middleware between the MPI parallel library API and the hardware environment
     • MPICH builds are available for Unix-based systems, and an installer is available for Windows
     • MPICH2 is the latest version of the implementation
     • http://www-unix.mcs.anl.gov/mpi/mpich/

  19. MPI Program Compilation (Unix)
     Fortran:
     mpif90 -c hello_world.f
     mpif90 -o hello_world hello_world.o
     C:
     mpicc -c hello_world.c
     mpicc -o hello_world hello_world.o

  20. MPI Program Execution
     Fortran/C:
     mpiexec -n 4 ./hello_world
     mpiexec is the command for launching a program in the parallel environment; the -n flag specifies the number of processes.
     mpiexec -help
     This command lists all the options available for running MPI programs.
     If you don't have mpiexec installed on your system, use mpirun with -np instead of -n.

  21. MPI Program Execution contd.
     mpiexec -machinefile hosts -n 7 ./hello_world
     The -machinefile flag allows you to specify a file containing the host names of the processors you want to use.
     Sample hosts file:
     master
     master
     node2
     node3
     node3
     node5
     node6

  22. MPICH on Windows
     • Installing MPICH2
     1. Download the Win32-IA32 version of MPICH2 from: http://www-unix.mcs.anl.gov/mpi/mpich2/
     2. Run the executable, mpich2-1.0.3-1-win32-ia32.msi (or a more recent version). Most likely it will fail with an error indicating that the .NET Framework is required.
     3. To download .NET Framework version 1.1, use this link: http://www.microsoft.com/downloads/details.aspx?FamilyId=262D25E3-F589-4842-8157-034D1E7CF3A3&displaylang=en

  23. MPICH on Windows contd.
     • Install the .NET Framework program
     • Install the MPICH2 executable. Write down the passphrase for future reference. The passphrase must be consistent across a network.
     • Add the MPICH2 path to Windows:
       • Right click "My Computer" and pick Properties
       • Select the Advanced tab
       • Select the Environment Variables button
       • Highlight the Path variable under System Variables and click Edit. Add "C:\MPICH2\bin" to the end of the list; make sure to separate this from the prior path with a semicolon.
     • Run the example executable to ensure correct installation:
       mpiexec -n 2 cpi.exe
     • If installed on a dual processor machine, verify that both processors are being utilized by examining "CPU Usage History" in the Windows Task Manager.
     • The first time mpiexec is run each session, it will ask for a username and password. To prevent being asked for this in the future, this information can be encrypted into the Windows registry by running:
       mpiexec -register
     • The username and password are your Windows XP logon information.

  24. MPICH on Windows contd.
     • Compilation (Fortran):
     ifort /fpp /include:"C:/MPICH2/INCLUDE" /names:uppercase /iface:cref /libs:static /threads /c hello_world.f
     • The above command compiles the parallel program and creates a .obj file.
     ifort -o hello_world.exe hello_world.obj C:/MPICH2/LIB/cxx.lib C:/MPICH2/LIB/mpi.lib C:/MPICH2/LIB/fmpich2.lib C:/MPICH2/LIB/fmpich2s.lib C:/MPICH2/LIB/fmpich2g.lib
     • The above command links the object file and creates the executable. The executable is run in the same way as specified before, using the mpiexec command.

  25. THE END
     • Useful sources:
       http://www.llnl.gov/computing/tutorials/mpi/#LLNL
       http://www-unix.mcs.anl.gov/mpi/
     • CS 6290 - High Performance Computing & Architecture
     • For more assistance, you can contact:
       Nischint Rajmohan
       MK402
       404-894-6301
       nischint@gatech.edu
