MPI-2 Features Overview and MPI Implementations


Presentation Transcript


  1. MPI-2 Features Overview and MPI Implementations • University of North Carolina - Chapel Hill, ITS Research Computing • Instructor: Mark Reed • Email: markreed@unc.edu

  2. MPI-2.0 • Standard since July 1997 • Extends rather than replaces MPI-1.2 • http://www.mpi-forum.org/docs/mpi-20-html/mpi2-report.html • Implementations were slow to follow • Reference: http://www.hlrs.de/organization/par/par_prog_ws/pdf/mpi_2_short.pdf

  3. Process Creation and Management • One-Sided Communications • I/O • C++ Language Bindings • Extended Collective Operations • External Interfaces • Miscellany

  4. Dynamic Process Management • MPI-1 is static; the goal is to start new MPI processes at run time • Spawn interface at the initiators (parents): spawning new processes is collective and returns an intercommunicator; the local group is the group of spawning processes, the remote group is the group of spawned processes • Spawn interface at the spawned processes (children): the new processes have their own MPI_COMM_WORLD, and MPI_Comm_get_parent() returns an intercommunicator to the parent processes (see the sketch below)
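
A minimal sketch in C of both sides of the spawn interface; spawning copies of the program's own executable (argv[0]) and the count of 4 are illustrative choices, not something the slide prescribes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Comm parent, children;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Initiator (parent) side: collectively spawn 4 copies of this
           executable. The returned intercommunicator has the spawning
           processes as its local group and the spawned processes as
           its remote group. */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children,
                       MPI_ERRCODES_IGNORE);
    } else {
        /* Spawned (child) side: the children share their own, new
           MPI_COMM_WORLD; 'parent' is the intercommunicator back to
           the parent processes. */
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("child rank %d in the spawned MPI_COMM_WORLD\n", rank);
    }

    MPI_Finalize();
    return 0;
}
```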

  5. One-sided Communication • PUT and GET data relative to the memory of another process • Inherently more “dangerous” because of the lack of synchronization • Subtle memory effects: cache coherence, contiguous layout • Implemented with special synchronization calls surrounding the one-sided calls (see the sketch below)
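
A minimal sketch in C of a fence-synchronized PUT, assuming the program runs with at least two processes; the one-int window layout is illustrative:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, nprocs;
    int local = 0;           /* memory exposed to other processes */
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Expose one int of local memory in an RMA window. */
    MPI_Win_create(&local, (MPI_Aint)sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Special synchronization calls (fences) bracket the one-sided
       calls; without them the access is erroneous. */
    MPI_Win_fence(0, win);
    if (rank == 0 && nprocs > 1) {
        int value = 42;
        /* PUT into rank 1's window; rank 1 makes no receive call. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```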

  6. Parallel I/O • Reading and writing files in parallel • Rich set of features • Basic operations: open, close, read, write, seek (see the sketch below) • Noncontiguous access in both memory and file: logical view via filetype and element-type, physical view addressed by hints, e.g. "striping_unit" • Explicit offsets / individual file pointers / shared file pointer • Collective / non-collective • Blocking / non-blocking or split collective • Non-atomic / atomic / explicit sync • "native" / "internal" / "external32" data representation
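
A minimal sketch in C of an explicit-offset parallel write, where every process writes its rank into one shared file; the file name "out.dat" is an illustrative assumption:

```c
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collectively open (and create) one file shared by all ranks. */
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank writes one int at an explicit, rank-based offset,
       so the writes land in disjoint parts of the file. */
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int),
                      &rank, 1, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```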

  7. C++ Language Bindings • C++ bindings match the new C bindings • MPI objects are C++ objects • MPI functions are methods of C++ classes • Users must use the MPI create and free functions instead of default constructors and destructors • Shallow copy semantics (except MPI::Status objects) • C++ exceptions used instead of returning error codes • Declared within an MPI namespace (MPI::...) • C++/C mixed-language interoperability

  8. Extended Collective Operations • In MPI-1, collective operations are restricted to ordinary (intra) communicators • In MPI-2, most collective operations are extended with additional functionality for intercommunicators; e.g., Bcast on a parents-children intercommunicator sends data from one parent process to all children • Provision to specify "in place" buffers for collective operations on intracommunicators • Two new collective routines: generalized all-to-all and exclusive scan (see the sketch below)
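
A minimal sketch in C of two of these items: the new exclusive scan (MPI_Exscan) and an "in place" buffer (MPI_IN_PLACE) on an intracommunicator:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, prefix = 0, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Exclusive scan: rank i receives the sum of the values
       contributed by ranks 0..i-1 (result undefined on rank 0). */
    MPI_Exscan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* "In place" buffer: MPI_IN_PLACE replaces the send buffer,
       so 'total' serves as both input and output. */
    total = rank;
    MPI_Allreduce(MPI_IN_PLACE, &total, 1, MPI_INT, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank > 0)
        printf("rank %d: prefix=%d total=%d\n", rank, prefix, total);

    MPI_Finalize();
    return 0;
}
```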

  9. External Interfaces • Generalized requests: users can create new non-blocking operations • Naming objects for debuggers and profilers: label communicators, windows, and datatypes (see the sketch below) • Allows users to add error codes, classes, and strings • Specifies how threads are to be handled if the implementation chooses to provide them
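
A minimal sketch in C of the object-naming hooks; the label "app_world" is an arbitrary illustrative string:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    char name[MPI_MAX_OBJECT_NAME];
    int len;

    MPI_Init(&argc, &argv);

    /* Attach a human-readable label that debuggers and profilers
       can display when reporting on this communicator. */
    MPI_Comm_set_name(MPI_COMM_WORLD, "app_world");
    MPI_Comm_get_name(MPI_COMM_WORLD, name, &len);
    printf("communicator is named \"%s\"\n", name);

    MPI_Finalize();
    return 0;
}
```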

  10. Miscellany • Standard startup with mpiexec: recommended but not required • Implementations are allowed to accept NULL in MPI_Init rather than argc and argv • MPI_Finalized(flag) added for library writers (see the sketch below) • New predefined datatypes: MPI_WCHAR, MPI_SIGNED_CHAR, MPI_UNSIGNED_LONG_LONG
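
A minimal sketch in C of two of these items: NULL arguments to MPI_Init and the MPI_Finalized query:

```c
#include <mpi.h>
#include <stdio.h>

int main(void) {
    int finalized = 0;

    /* MPI-2 permits NULL here instead of &argc and &argv. */
    MPI_Init(NULL, NULL);
    MPI_Finalize();

    /* For library writers: MPI_Finalized may be called at any time,
       even after MPI_Finalize, to test whether MPI has shut down. */
    MPI_Finalized(&finalized);
    if (finalized)
        printf("MPI has been finalized\n");
    return 0;
}
```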

  11. MPI Committee Reconvened • MPI 2.1, done mid-year 2008, provides a simple clarification of the current MPI 2.0 standard and corrections to the MPI 2.0 document, with no API changes • MPI 2.2, planned for early 2009, addresses clear errors and omissions in the standard

  12. MPI-3.0 • MPI 3.0, targeted for early 2010, will involve a more thorough rethinking of the standard to effectively support current and future applications • Issues that have already been raised include improved one-sided communication, support for generalized requests, remote memory access, non-blocking collectives, new language support, and fault tolerance

  13. A Few Free MPI Variations • MPICH, MPICH2 • MPICH-G2 • LAM/MPI • Open-MPI • MVAPICH

  14. MPICH Flavors • MPICH is a freely available, portable implementation of MPI from ANL and MSU: http://www-unix.mcs.anl.gov/mpi/mpich/index.htm • MPICH-G2 is the Globus version of MPICH: a grid-enabled implementation of the MPI v1.1 standard, used to couple multiple (heterogeneous) machines • MPICH2 is an all-new implementation of MPI, with support for one-sided communication, dynamic processes, intercommunicator collective operations, and expanded MPI-IO functionality

  15. LAM/MPI • http://www.lam-mpi.org • High-quality open-source implementation of MPI-1.2 and much of MPI-2 • Designed for heterogeneous clusters • MPI-2 support (partial list): Process Creation and Management, One-sided Communication (partial implementation), MPI I/O (using ROMIO), C++ bindings, MPI-2 miscellany (mpiexec), thread support (MPI_THREAD_SINGLE through MPI_THREAD_SERIALIZED), user functions at termination, language interoperability

  16. Open-MPI • Open MPI is a project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) in order to build the best MPI library available • Goal: a free, open-source, peer-reviewed, production-quality, completely new MPI-2 compliant implementation • Fault tolerance is a growing concern • First release: November 2005; 1.2.x out now • http://www.open-mpi.org

  17. MVAPICH • Pronounced “em-vah-pich” • Implementation based on MPICH and MVICH, designed for high performance with MPI-1 and MPI-2 on InfiniBand as well as other RDMA-enabled interconnects • MVAPICH is a high-performance implementation of MPI-1 over InfiniBand, based on MPICH1 • MVAPICH2 is a high-performance MPI-2 implementation, based on MPICH2 • The name was chosen to reflect the fact that this is an MPI implementation based on MPICH (also MVICH) over the InfiniBand VAPI interface • Other underlying transport interfaces are also supported for portability (uDAPL, OpenIB/Gen2, TCP/IP) • http://mvapich.cse.ohio-state.edu/index.shtml

  18. MPI Opinion … for discussion :) “Without fear of contradiction, the MPI standard has been the most significant advancement in practical parallel programming in over a decade, and it is the foundation of the vast majority of modern parallel programs.” Thom H. Dunning, Jr. (NCSA), Robert J. Harrison and Jeffrey A. Nichols (ORNL), "NWChem: Development of a Modern Quantum Chemistry Program," CTWatch Quarterly, Volume 2, Number 2, May 2006.

  19. Opinion Cont. • “A completely consistent (and deliberately provocative) viewpoint is that MPI is evil. The emergence of MPI coincided with an almost complete cessation of parallel programming tool/paradigm research. This was due to many factors, but in particular to the very public and very expensive failure of HPF. The downsides of MPI are that it standardized (in order to be successful itself) only the primitive and already old communicating sequential process (CSP) programming model, and MPI’s success further stifled adoption of advanced parallel programming techniques since any new method was by definition not going to be as portable.” • Thom H. Dunning, Jr. (NCSA), Robert J. Harrison and Jeffrey A. Nichols (ORNL), "NWChem: Development of a Modern Quantum Chemistry Program," CTWatch Quarterly, Volume 2, Number 2, May 2006.

  20. More discussion … some MPI bashing • “But it's hard to find a real fan of MPI today. Most either tolerate it or hate it. Although it provides a widely portable and standardized programming interface for parallel computing, its shortfalls are numerous: hard to learn, difficult to program, no allowance for incremental parallelization, doesn't scale easily, and so on. It's widely acknowledged that MPI's limitations must be overcome to make parallel programming more accessible.” (from HPCWIRE) • Hence this class! :)
