
Survey of MPI Call Usage


Presentation Transcript


  1. Survey of MPI Call Usage. Daniel Han, USC; Terry Jones, LLNL. August 12, 2004. UCRL-PRES-206265

  2. Outline • Motivation • About the Applications • Statistics Gathered • Inferences • Future Work

  3. Motivation • Info for App developers • Information on the expense of basic MPI functions (recode?) • Set expectations • Many tradeoffs available in MPI design • Memory allocation decisions • Protocol cutoff point decisions • Where is additional code complexity worth it? • Information on MPI Usage is scarce • New tools (e.g. mpiP) make profiling reasonable • Easy to incorporate (no source code changes) • Easy to interpret • Unobtrusive observation (little performance impact)
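
The mpiP workflow mentioned above needs no source changes because it attaches at link time and interposes on MPI through the standard PMPI profiling interface. Below is a minimal sketch under stated assumptions: the link line in the comment is typical but installation-dependent, and using MPI_Pcontrol to scope the measured region assumes an mpiP build configured to honor it; neither detail is taken from the survey itself.

```c
/* Minimal sketch of profiling an MPI code with mpiP.  The application source
 * is unchanged; mpiP interposes on MPI calls through the standard PMPI
 * profiling interface.  A typical (installation-dependent) link line is:
 *
 *     mpicc ring.c -o ring -lmpiP -lm -lbfd -liberty -lunwind
 *
 * The MPI_Pcontrol() calls below are standard MPI; using them to delimit the
 * measured region is an assumption about how the local mpiP build behaves. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = -1;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Pcontrol(1);                 /* (assumed) begin measured region */
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Sendrecv(&rank, 1, MPI_INT, next, 0,
                 &token, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Pcontrol(0);                 /* (assumed) end measured region */

    printf("rank %d received %d from rank %d\n", rank, token, prev);
    MPI_Finalize();
    return 0;
}
```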

  4. About the applications… • Amtran: discrete ordinates neutron transport • Ares: 3-D instability simulation in massive star supernova envelopes • Ardra: neutron transport/radiation diffusion code exploring new numerical algorithms and methods for the solution of the Boltzmann transport equation (e.g. nuclear imaging) • Geodyne: Eulerian adaptive mesh refinement (e.g. comet-earth impacts) • IRS: solves the radiation transport equation by the flux-limiting diffusion approximation using an implicit matrix solution • Mdcask: molecular dynamics code for the study of radiation damage in metals • Linpack/HPL: solves a random dense linear system • Miranda: hydrodynamics code simulating instability growth • Smg: a parallel semicoarsening multigrid solver for the linear systems arising from finite difference, finite volume, or finite element discretizations • Spheral: provides a steerable parallel environment for performing coupled hydrodynamical & gravitational numerical simulations (http://sourceforge.net/projects/spheral) • Sweep3d: solves a 1-group neutron transport problem • Umt2k: photon transport code for unstructured meshes

  5. Percent of time to MPI • Overall for sampled applications: 60% MPI, 40% remaining app

  6. Top MPI Point-to-Point Calls

  7. Top MPI Collective Calls

  8. Comparing Collective and Point-to-Point

  9. Average Number of Calls for Most Common MPI Functions (“Large” Runs)

  10. Communication Patterns: most dominant msgsize

  11. Communication Patterns (continued)

  12. Frequency of callsites by MPI functions

  13. Scalability

  14. Observations Summary • General • People seem to scale code to ~60% MPI/communication • Isend/Irecv/Wait many times more prevalent than Sendrecv and blocking send/recv • Time spent in collectives predominantly divided among barrier, allreduce, broadcast, gather, and alltoall • Most common msgsize is typically between 1K and 1MB • Surprises • Waitany most prevalent call • Almost all pt2pt messages are the same size within a run • Often, message size decreases with large runs • Some codes driven by alltoall performance
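
To make the first two observations concrete, here is a minimal C sketch (an illustrative assumption, not code from any of the surveyed applications) of the Isend/Irecv exchange pattern whose completions are drained with MPI_Waitany, which is consistent with Waitany being the most frequently invoked call:

```c
/* Sketch of the non-blocking exchange pattern the survey observed: post every
 * receive and send up front, then drain completions with MPI_Waitany.  One
 * exchange over nnbr neighbors issues nnbr Irecv + nnbr Isend calls but
 * 2*nnbr Waitany calls, so Waitany dominates the call counts. */
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical halo exchange with the neighbor ranks listed in nbr[]. */
static void exchange(int nnbr, const int *nbr,
                     double *sendbuf, double *recvbuf, int count)
{
    MPI_Request *req = malloc(2 * nnbr * sizeof *req);

    for (int i = 0; i < nnbr; i++)                  /* post all receives */
        MPI_Irecv(recvbuf + i * count, count, MPI_DOUBLE,
                  nbr[i], 0, MPI_COMM_WORLD, &req[i]);
    for (int i = 0; i < nnbr; i++)                  /* post all sends */
        MPI_Isend(sendbuf + i * count, count, MPI_DOUBLE,
                  nbr[i], 0, MPI_COMM_WORLD, &req[nnbr + i]);

    for (int done = 0; done < 2 * nnbr; done++) {   /* drain completions */
        int idx;
        MPI_Waitany(2 * nnbr, req, &idx, MPI_STATUS_IGNORE);
        /* work overlapping the still-outstanding messages could go here */
    }
    free(req);
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank exchanges a 1024-double message with its two ring neighbors */
    int nbr[2] = { (rank + 1) % size, (rank + size - 1) % size };
    double sendbuf[2048], recvbuf[2048];
    for (int i = 0; i < 2048; i++) sendbuf[i] = rank;

    exchange(2, nbr, sendbuf, recvbuf, 1024);

    MPI_Finalize();
    return 0;
}
```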

  15. Future Work & Concluding Remarks • Further understanding of apps needed • Results for other test configurations • When can apps make better use of collectives? • MPI-IO usage info needed • Classified applications • Acknowledgements • mpiP is due to Jeffrey Vetter and Chris Chambreau http://www.llnl.gov/CASC/mpip • This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
