
CS 591x






  1. CS 591x Overview of MPI-2

  2. Major Features of MPI-2 • Superset of MPI-1 • Parallel IO (previously discussed) • Standard Process Startup • Dynamic Process Management • Remote Memory Access

  3. MPI-2 • MPI-1 includes no specifications for a process executor • Left to individual implementations • usually “mpirun” • even mpirun can vary across implementations • options, parameters, keywords can be different

  4. MPI-2 • MPI-2 includes a recommendation for a standard method to start MPI processes • The result – mpiexec • mpiexec arguments and parameters have standard meaning • standard = portable

  5. mpiexec arguments • -n [numprocesses] • number of processes requested (like -n in mpirun) • mpiexec -n 12 myprog • -soft [minprocesses] • start the job with minprocesses processes if -n processes are not available • mpiexec -n 12 -soft 6 myprog

  6. mpiexec arguments • -soft [n:m] • a soft request can be a range • mpiexec -n 12 -soft 4:12 myprog • -host [hostname] • requests execution on a specific host • mpiexec -n 4 -host node4 myprog • -arch [archname] • start the job on a specific architecture

  7. mpiexec arguments • -file [filename] • requests that the job run per the specifications contained in filename • mpiexec -file specfile • supports the execution of multiple executables

  8. Remote Memory Access • Recall that in MPI-1: • message passing is essentially a push operation • the sender has to initiate the communication, or • actively participate in the communication operation (collective communications) • communication is symmetrical

  9. Remote Memory Access • How would you handle a situation where: • process x decides that it needs the value in variable a in process y… • … and process y does not initiate a communication operation

  10. Remote Memory Access • MPI-2 has the answer… • Remote Memory Access • allows a process to initiate and carry out an asymmetric communication operation… • …assuming the processes have set up the appropriate objects: windows

  11. Remote Memory Access int MPI_Win_create( void* var, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win* win)

  12. Remote Memory Access var – the variable to appear in the window size – the size of var, in bytes disp_unit – the displacement unit, in bytes info – key-value pairs to express “hints” to MPI-2 on how to do the Win_create comm – the communicator that can share the window win – the name of the window object

  13. Remote Memory Access int MPI_Win_fence( int assert, MPI_Win win) assert – usually 0 win - the name of the window

  14. Remote Memory Access int MPI_Get( void* var, int count, MPI_Datatype datatype, int target_rank, MPI_Aint displacement, int target_count, MPI_Datatype target_datatype, MPI_Win win)

  15. Remote Memory Access int MPI_Win_free( MPI_Win* win)

  16. Remote Memory Access int MPI_Accumulate( void* var, int count, MPI_Datatype datatype, int target_rank, MPI_Aint displace, int target_count, MPI_Datatype target_datatype, MPI_Op operation, MPI_Win win)

  17. Remote Memory Access MPI_Init(&argc, &argv); MPI_Comm_size(MPI_COMM_WORLD, &nprocs); MPI_Comm_rank(MPI_COMM_WORLD, &myrank); if(myrank==0) { MPI_Win_create(&n, sizeof(int), 1, MPI_INFO_NULL, MPI_COMM_WORLD, &nwin); } else { MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &nwin); } …

  18. Remote Memory Access MPI_Win_fence(0, nwin); if (myrank != 0) MPI_Get(&n, 1, MPI_INT, 0, 0, 1, MPI_INT, nwin); MPI_Win_fence(0, nwin);

  19. Remote Memory Access • BTW--- • there is an MPI_Put as well

  20. Dynamic Process Management • In MPI-1, recall that: • process creation is static • all processes in the job are created when the job initializes • the number of processes in the job never varies as job execution progresses

  21. Dynamic Process Management • MPI-2 allows the creation of new processes within the application— • called spawning • spawning returns an intercommunicator (intercomm) connecting the parent and child process groups

  22. Dynamic Process Creation int MPI_Comm_spawn( char* command, char* argv[], int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm* intercomm, int errorcodes[]);

  23. Dynamic Process Creation int MPI_Comm_get_parent( MPI_Comm* parent); retrieves the intercommunicator connecting this process to its parent (MPI_COMM_NULL if the process was not spawned)

  24. Dynamic Process Creation int MPI_Intercomm_merge( MPI_Comm intercomm, int high, MPI_Comm* new_intracomm)

  25. Dynamic Process Creation … MPI_Init(&argc, &argv); makehostlist(argv[1], "targets", &num_hosts); MPI_Info_create( &hostinfo); MPI_Info_set(hostinfo, "file", "targets"); sprintf(soft_limit, "0:%d", num_hosts); MPI_Info_set(hostinfo, "soft", soft_limit); MPI_Comm_spawn("pcp_slave", MPI_ARGV_NULL, num_hosts, hostinfo, 0, MPI_COMM_SELF, &pcpslaves, MPI_ERRORCODES_IGNORE); MPI_Info_free( &hostinfo ); MPI_Intercomm_merge( pcpslaves, 0, &all_procs); …

  26. Dynamic Process Creation … // in spawned process MPI_Init( &argc, &argv ); MPI_Comm_get_parent( &slavecomm); MPI_Intercomm_merge( slavecomm, 1,&all_procs); … // now like intracomm…

  27. Dynamic Process Creation – Multiple Executables int MPI_Comm_spawn_multiple( int count, char* commands[], char** argv[], int maxprocs[], MPI_Info info[], int root, MPI_Comm comm, MPI_Comm* intercomm, int errorcodes[])

  28. Dynamic Process Creation-Multiple executables - sample char *array_of_commands[2] = {"ocean","atmos"}; char **array_of_argv[2]; char *argv0[] = {"-gridfile", "ocean1.grd", (char *)0}; char *argv1[] = {"atmos.grd", (char *)0}; array_of_argv[0] = argv0; array_of_argv[1] = argv1; MPI_Comm_spawn_multiple(2, array_of_commands, array_of_argv, ...); from:http://www.epcc.ed.ac.uk/epcc-tec/document_archive/mpi-20-htm

  29. So, What about MPI-2? • A lot of existing code is in MPI-1 • MPI-1 meets a lot of scientific and engineering computing needs • MPI-2 implementations are not as widespread as MPI-1 • MPI-2 is, at least in part, an experimental platform for research in parallel computing

  30. MPI-2 ..for more information • http://www.mpi-forum.org/docs/
