
MPI Part 1



  1. MPI Part 1 Introduction to MPI Commands

  2. Basics – Send and Receive • MPI is a message passing environment. The processors’ method of sharing information is NOT via shared memory, but by processors sending messages to each other • This is done via a send-receive pairing. The originating processor can send at any time it wants to, but the destination processor must execute a receive before the message is delivered to it

  3. Send Function - Form • void MPI::COMM_WORLD.Send(buf, count, datatype, dest, tag) • buf – the name of the variable to be sent • count – how many to send • datatype – the type of what is being sent • dest – where to send it • tag – message type • COMM_WORLD – communicator – info about the parallel system
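As a concrete illustration, here is a sketch of a single Send call with each argument annotated. It assumes the MPI C++ bindings used throughout these slides (deprecated in MPI-2.2 and removed in MPI-3.0, so an MPI library that still provides them is required); the value 42, the tag 0, and the function name are illustrative only.

```cpp
#include <mpi.h>

// Sketch: one Send call with each argument labelled.
// Assumes the MPI C++ bindings shown on the slide.
void send_one_int(int dest) {
    int value = 42;                 // the data to be sent (illustrative)
    MPI::COMM_WORLD.Send(&value,    // buf      - address of the data
                         1,         // count    - how many elements
                         MPI::INT,  // datatype - MPI-defined type of the data
                         dest,      // dest     - rank of the destination process
                         0);        // tag      - programmer-chosen message type
}
```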

  4. Send Arguments Discussion • buf – the address of the information to send – can be any data type. • datatype – must be a data type defined in MPI (ex. MPI::INT, MPI::FLOAT, MPI::DOUBLE). The user can create data types and “register” them with MPI (later). • count – how many values of type datatype are to be sent, starting from the address buf
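A small sketch of how buf, count, and datatype work together: one call sends 100 doubles starting at the address of the array (the size 100 and the destination rank are assumptions made for the example).

```cpp
#include <mpi.h>

// Sketch: count elements of type datatype are sent starting at buf.
void send_block(int dest) {
    double block[100];                            // buf will point at block[0]
    for (int i = 0; i < 100; ++i) block[i] = i;   // sample data
    // 100 values of MPI::DOUBLE, starting from the address 'block'
    MPI::COMM_WORLD.Send(block, 100, MPI::DOUBLE, dest, 0);
}
```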

  5. Send Args Discussion (cont.) • dest – which process to send the message to. Type – int • tag – indicator about what kind of message is being sent. Programmer determined. Allows a process to send a variety of types of messages. Type – int • COMM_WORLD – communicator – information about the parallel system configuration used to map the destination (int) to a particular processor. There will be ways to change and/or create new communicators (later).
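To illustrate how tags let one destination distinguish kinds of messages, here is a sketch in which a result message is told apart from a shutdown notice; the tag names and values are purely illustrative.

```cpp
#include <mpi.h>

// Sketch: programmer-chosen tags label different kinds of messages.
const int TAG_RESULT   = 1;   // illustrative tag values
const int TAG_SHUTDOWN = 2;

void send_result(double result, int dest) {
    MPI::COMM_WORLD.Send(&result, 1, MPI::DOUBLE, dest, TAG_RESULT);
}

void send_shutdown(int dest) {
    int dummy = 0;   // payload is irrelevant; the tag carries the meaning
    MPI::COMM_WORLD.Send(&dummy, 1, MPI::INT, dest, TAG_SHUTDOWN);
}
```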

  6. More Discussion and Notes • It is more efficient to send a few big blocks of data than it is to send many small blocks of data (per-message sending overhead). • MPI uses MPI-defined data types so that communication between heterogeneous machines is possible. • Data being sent must be described with an MPI-defined datatype that matches its declared language type • MPI has MANY constants to indicate certain values (for example, MPI_INT may be 3). Get to know these constants.
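The overhead point can be seen in code: the two sketches below move the same n ints, but the first pays the per-message cost n times and the second pays it once (the function names are illustrative).

```cpp
#include <mpi.h>

// Sketch: many small sends vs. one big send of the same data.
void send_many_small(int* data, int n, int dest) {
    for (int i = 0; i < n; ++i)                        // n messages: overhead paid n times
        MPI::COMM_WORLD.Send(&data[i], 1, MPI::INT, dest, 0);
}

void send_one_big(int* data, int n, int dest) {
    MPI::COMM_WORLD.Send(data, n, MPI::INT, dest, 0);  // one message: overhead paid once
}
```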

  7. Discussion and notes (cont.) • This send is a blocking send. The next instructions in the program will NOT be executed until the send is complete (the data has been handed off to the system; it does NOT wait until the data has been received at the destination).

  8. Receive • MPI::COMM_WORLD.Recv(buf, count, datatype, source, tag, status) • buf – where to put the message • count – how many (the maximum number of items the buffer can accept) • datatype – an MPI type for the count items in buf • source – accept the message from this process (can be a wildcard for any process). • tag – which type of message to accept (can be a wildcard for any type) • status – optional, contains the actual source and tag for use if the tag and/or source args were wildcards.
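A sketch of a receive that uses both wildcards and then reads the actual source and tag back out of the status object (the buffer size of 100 is an assumption for the example):

```cpp
#include <mpi.h>

// Sketch: Recv with wildcard source/tag, then inspect the status.
void receive_anything() {
    double buf[100];
    MPI::Status status;
    MPI::COMM_WORLD.Recv(buf, 100, MPI::DOUBLE,
                         MPI::ANY_SOURCE,   // wildcard: accept from any process
                         MPI::ANY_TAG,      // wildcard: accept any message type
                         status);           // filled in with the actual source/tag
    int actual_source = status.Get_source();   // who really sent it
    int actual_tag    = status.Get_tag();      // what kind of message it was
    (void)actual_source; (void)actual_tag;     // a real program would act on these
}
```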

  9. Minimal MPI • Each MPI program needs the following six functions: • MPI::Init(argc, argv) – initialize MPI – set up the COMM_WORLD communicator • int MPI::COMM_WORLD.Get_size() – Number of processes • int MPI::COMM_WORLD.Get_rank() – which process am I? • Send • Recv • MPI::Finalize() – Terminate MPI
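Putting the six together, here is a sketch of a complete minimal program: rank 0 sends one int to rank 1. It assumes the program is launched with at least two processes and, as above, an MPI library that still provides the C++ bindings.

```cpp
#include <mpi.h>
#include <iostream>

// Sketch: a complete program using only the six calls on this slide.
int main(int argc, char* argv[]) {
    MPI::Init(argc, argv);                      // 1. initialize MPI / COMM_WORLD
    int size = MPI::COMM_WORLD.Get_size();      // 2. number of processes
    int rank = MPI::COMM_WORLD.Get_rank();      // 3. which process am I?

    if (size >= 2) {
        if (rank == 0) {
            int msg = 42;
            MPI::COMM_WORLD.Send(&msg, 1, MPI::INT, 1, 0);  // 4. send to rank 1
        } else if (rank == 1) {
            int msg;
            MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT, 0, 0);  // 5. receive from rank 0
            std::cout << "rank 1 received " << msg << std::endl;
        }
    }

    MPI::Finalize();                            // 6. terminate MPI
    return 0;
}
```

It would typically be compiled with mpic++ and launched with something like mpirun -np 2 ./a.out (the exact commands depend on the MPI installation).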
