
ICOM 5995: Performance Instrumentation and Visualization for High Performance Computer Systems


Presentation Transcript


  1. ICOM 5995: Performance Instrumentation and Visualization for High Performance Computer Systems Lecture 9 October 30, 2002 Nayda G. Santiago

  2. Announcement • Daniel Burbano • Projects • Attendance List • Registrar’s office • We will go to the lab

  3. Overview • MPI basic functions • References • MPICH home page • Jack Dongarra’s homepage

  4. Getting started with MPI • MPI contains 125 routines (more with extensions!) • Many programs can be written with only six (6) MPI routines • Upon startup, all processes can be identified by their rank, which runs from 0 to N-1, where N is the number of processes

  5. MPI – Basic functions • These six functions allow you to write many programs • MPI_Init – Initialize MPI • MPI_Finalize – Terminate MPI • MPI_Comm_size – How many processes are running? • MPI_Comm_rank – What is my process number? • MPI_Send – Send a message • MPI_Recv – Receive a message
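
As an illustration of how far these six routines go, here is a minimal sketch in C of a two-process exchange (the payload value and tag are arbitrary choices for the example, and it assumes at least two processes were launched):

        #include <stdio.h>
        #include "mpi.h"

        int main(int argc, char *argv[])
        {
            int rank, size, value;
            MPI_Status status;

            MPI_Init(&argc, &argv);                  /* initialize MPI */
            MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes? */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* what is my process number? */

            if (rank == 0 && size > 1) {
                value = 42;                          /* arbitrary payload */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("Process 1 received %d from process 0\n", value);
            }

            MPI_Finalize();                          /* terminate MPI */
            return 0;
        }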

  6. Basic MPI: MPI_INIT • MPI_INIT must be the first MPI routine called in any program • MPI_INIT(ierr) • ierr: integer error return value • 0: success • Non-zero: failure • Can only be called once • Sets up the environment to enable message passing
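
In the C binding, the error code comes back as the return value rather than through an ierr argument, and MPI_Init takes pointers to argc and argv. A minimal sketch of checking it (MPI_SUCCESS is the standard constant for success):

        #include <stdio.h>
        #include <stdlib.h>
        #include "mpi.h"

        int main(int argc, char *argv[])
        {
            /* MPI_Init must come before any other MPI call */
            int ierr = MPI_Init(&argc, &argv);
            if (ierr != MPI_SUCCESS) {
                fprintf(stderr, "MPI_Init failed with code %d\n", ierr);
                exit(1);
            }
            /* ... rest of the program ... */
            MPI_Finalize();
            return 0;
        }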

  7. Basic MPI: MPI_FINALIZE • MPI_FINALIZE must be called by each process before it exits • MPI_FINALIZE(ierr) • ierr: integer error return value • 0: success • Non-zero: failure • No other MPI routine can be called after MPI_FINALIZE • All pending communication must be completed before calling MPI_FINALIZE
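
The "pending communication" rule matters mainly for nonblocking operations, which these slides do not cover; purely as a hedged illustration, an outstanding MPI_Isend must be completed (for example with MPI_Wait) before MPI_Finalize is called:

        #include "mpi.h"

        int main(int argc, char *argv[])
        {
            int rank, value = 7;
            MPI_Request request;
            MPI_Status status;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                /* nonblocking send to process 1 (assumes at least 2 processes) */
                MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
                /* the send is still pending here ... */
                MPI_Wait(&request, &status);   /* complete it before finalizing */
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            }

            /* only now is it safe to finalize */
            MPI_Finalize();
            return 0;
        }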

  8. MPI Basic Program Structure

     Fortran:

        program main
        include 'mpif.h'
        integer ierr
        call MPI_INIT(ierr)
        ! ----- do some work -----
        call MPI_FINALIZE(ierr)
        ! maybe do some additional local computation
        end

     C:

        #include "mpi.h"
        int main(int argc, char *argv[])
        {
            MPI_Init(&argc, &argv);
            /* ----- do some work ----- */
            MPI_Finalize();
            /* maybe do some additional local computation */
            return 0;
        }

  9. Groups and Communicators • We will not be using this, but it is important so that you understand the routines • Groups can be thought of as sets of processes • These groups are associated with what are called “communicators” • Upon startup, there is a single set of processes associated with the communicator MPI_COMM_WORLD • Groups can be created which are sub-sets of this original group, also associated with communicators
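
For concreteness, one common way to create such a sub-communicator is MPI_Comm_split (a hedged sketch only; in this course we will stick to MPI_COMM_WORLD):

        #include <stdio.h>
        #include "mpi.h"

        int main(int argc, char *argv[])
        {
            int world_rank, sub_rank;
            MPI_Comm subcomm;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

            /* split the processes into two groups: even ranks and odd ranks */
            MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);

            /* each process now also has a rank within its sub-communicator */
            MPI_Comm_rank(subcomm, &sub_rank);
            printf("world rank %d has rank %d in its sub-communicator\n",
                   world_rank, sub_rank);

            MPI_Comm_free(&subcomm);
            MPI_Finalize();
            return 0;
        }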

  10. MPI_COMM_RANK(comm, rank, ierr) • comm: Integer communicator. We will always use MPI_COMM_WORLD • rank: Returned rank of calling process • ierr: Integer error return code • This routine returns the relative rank of the calling process, within the group associated with comm.

  11. MPI_COMM_SIZE(comm, size, ierr) • comm: Integer communicator identifier • size: Upon return, the number of processes in the group associated with comm. For our purposes, always the total number of processes • ierr: Integer error return code • This routine returns the number of processes in the group associated with comm

  12. A very simple program: Hello World

        program main
        include 'mpif.h'
        integer ierr, size, rank
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
        print *, 'Hello world from process', rank, 'of', size
        call MPI_FINALIZE(ierr)
        end
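
For comparison, here is a C sketch of the same program (using the standard C bindings; note that the output arguments are passed by address and the error code is the return value):

        #include <stdio.h>
        #include "mpi.h"

        int main(int argc, char *argv[])
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            printf("Hello world from process %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }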

  13. Hello World • mpirun -np 4 a.out • Hello World from 2 of 4 • Hello World from 0 of 4 • Hello World from 1 of 4 • Hello World from 3 of 4 • mpirun -np 4 a.out • Hello World from 3 of 4 • Hello World from 1 of 4 • Hello World from 2 of 4 • Hello World from 0 of 4 • Note: the order of the output lines differs from run to run – the order in which the processes’ output appears is not guaranteed

  14. Progress Report • Report due next week: Nov. 6, 2002 before midnight, by email • Format: PDF, PS, DOC • Follow ‘Writing Formal Reports: An Approach for Engineering Students in 21st Century, 3rd Edition’ • Contents: • Title page • Abstract – Informative abstract • Table of contents • Introduction • Discussion • Time schedule and what you have completed so far • Future work • Details, what remains to be done • References
