
Sending large message counts (The MPI_Count issue)




  1. Sending large message counts (The MPI_Count issue)

  2. Motivation – initial requests
  • HP received several requests to scale applications in a way that required sending messages greater than 2GB.
  • The customers did not want to change their Fortran source:
    • compiled with -I8 (compiler flag to promote ALL integers to 64 bits)
  • They also wanted a Scalapack that would scale similarly.
  • They requested that the MPI implementation “handle” this.

  3. Motivation – HP’s partial response
  • Fortran:
    • Produced an -I8-compatible mode
    • Determined at run time whether the application was compiled with -I8 (and/or -R8)
    • Correctly interprets arguments as either 32- or 64-bit integers
    • Dynamically defines MPI data types
    • Internally, the MPI library always uses 64-bit integers for all count-related arguments.
  • C code:
    • HP-MPI produced new C interfaces (MPI_SendL) to allow modification of Scalapack and other libraries.

  4. A complete solution
  • Do we want a more complete solution, one that:
    • Works for all languages
    • Does not rely on compiler flags or non-standard APIs

  5. #0 – Fortran-only solution
  • Proposal #0:
    • Do nothing.
    • Let MPI implementations choose to support -I8 (see Platform MPI for an example)
    • Fortran applications that need to use long counts must use -I8
  • Pros:
    • No work for the Forum
  • Cons:
    • Fortran-only solution
    • A big hammer even for Fortran, since it may double memory use

  6. #1 – more APIs
  • Proposal #1:
    • Create new interfaces that accept a 64-bit count.
    • Example: MPI_SendL
    • The count could simply be a long int, but preferably a new type, MPI_Count, that can be changed based on what the implementation supports.
  • Pros:
    • No backward-compatibility issues
    • Fixes the problem NOW.
  • Cons:
    • Apps must be rewritten to access the new capability
    • Explosion of interfaces

  7. #2 – Sneak in MPI_Count
  • Proposal #2:
    • Change the relevant functions to use MPI_Count.
  • C++:
    • Function overloading handles it (assuming we care about C++).
    • Never need to recompile unless you use MPI_Count.
  • C:
    • Automatic type conversion handles it.
    • No recompile is needed at all if MPI_Count is defined as an int and the application passes ints.
  • Fortran:
    • Must be “told” the size of the MPI_Count arguments. Need agreement between application and library.
    • Constants must be changed to declared parameters:

      old:
        call MPI_Send(buf, 10, MPI_INTEGER, 1, 2, MPI_COMM_WORLD, ierr)
      new:
        integer*8 ten
        parameter (ten = 10)
        call MPI_Send(buf, ten, MPI_INTEGER, 1, 2, MPI_COMM_WORLD, ierr)

  8. #2 – Sneak in MPI_Count
  • Proposal #2:
  • Pros:
    • Only impacts users who want 64-bit applications
  • Cons:
    • The Fortran 77 application rewrite is ugly.
    • Existing C applications “work but are not MPI compliant.”
    • There are really two Fortran interfaces for every affected call:

      C:
        int MPI_Send(void* buf, MPI_Count count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

      Fortran (if the implementation is using 32-bit counts):
        MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
        <type> BUF(*)
        INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

      Fortran (if the implementation is using 64-bit counts):
        MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
        <type> BUF(*)
        INTEGER DATATYPE, DEST, TAG, COMM, IERROR
        INTEGER*8 COUNT

  9. #3 – Drop Fortran 77 support
  • Proposal #3:
    • Much of the problem with Proposal #2 is the “ugliness” of the Fortran 77 solution.
    • If we go with Fortran 90/95, then we have two options:
      • Use TYPE() to create an MPI_Count type. This enters new territory for the MPI standard, since all Fortran arguments are currently simple types.
      OR
      • Just allow the implementation to specify INTEGER*4 or INTEGER*8 and let conversion handle it. (Requires a Fortran wrapper for each routine, written in Fortran?)

  10. #3 – Drop Fortran 77 support
  • Proposal #3a: Use TYPE to declare a Fortran “MPI_Count”
    • Pros:
      • A solution exists for C, C++ and Fortran
    • Cons:
      • Like C, existing code now “works but isn’t MPI standard compliant”
      • We’ve never used a TYPE for any other Fortran arguments
  • Proposal #3b: Use automatic type matching in both C and Fortran 90 (no MPI_Count)
    • MPI_Send now takes a long int in C and an INTEGER*8 in Fortran 90/95
    • Pros:
      • Existing code is MPI compliant!
      • Users can pass ints or long ints, INTEGER*4 or INTEGER*8, and everything will be converted for them!
    • Cons:
      • Did I mention we would have to drop Fortran 77 support? Or… not support long counts in Fortran 77…

  11. #Final – Only change C, C++ and Fortran 90/95
  • Although cores are getting more plentiful and relatively less powerful, I think we can expect hybrid programming models to increase the need for large off-node message counts – we should do something.
  • Don’t drop Fortran 77; just let it take a different size of integer than Fortran 90/95.
  • Using MPI_Count for C/C++ is optional in my opinion, and may even be detrimental, because it makes people fearful that they have to change their code.
