
Center for Programming Models for Scalable Parallel Computing: Project Meeting Report



Presentation Transcript


1. Libraries, Languages, and Execution Models for Terascale Applications
www.pmodels.org
William D. Gropp
www.mcs.anl.gov/~gropp
Argonne National Laboratory
Center for Programming Models for Scalable Parallel Computing: Project Meeting Report

2. Participants
Coordinating Principal Investigator:
• Ewing Lusk – Argonne National Laboratory
Co-Principal Investigators (Laboratories):
• William Gropp – Argonne National Laboratory
• Ricky Kendall – Ames Laboratory
• Jarek Nieplocha – Pacific Northwest National Laboratory
Co-Principal Investigators (Universities):
• Barbara Chapman – University of Houston
• Guang Gao – University of Delaware
• John Mellor-Crummey – Rice University
• Robert Numrich – University of Minnesota
• Dhabaleswar Panda – Ohio State University
• Thomas Sterling – California Institute of Technology
• Marianne Winslett – University of Illinois
• Katherine Yelick – University of California, Berkeley

3. Problem Statement
• Problem: Current programming models have enabled the development of scalable applications on today's large-scale computers, but the application development process itself remains complex, lengthy, and expensive, obstructing progress in scientific application development.
• Solution: Facilitate application development by providing standard libraries, convenient parallel programming languages, and advanced programming models targeted at petaflops architectures.
• Goal: An array of attractive options for the convenient development of scalable, efficient scientific applications for terascale computers.

4. A Three-Pronged Approach to Next-Generation Programming Models
• Extensions to existing library-based models
  • MPI (MPI-2 and extensions)
  • Global Arrays and extensions
  • Portable SHMEM (a put/get sketch in the SHMEM style follows this list)
• Robust implementations of language-based models
  • UPC
  • Co-Array Fortran
  • Titanium
  • OpenMP optimizations
• Advanced models for advanced architectures
  • Multithreaded, PIM-based machines, Gilgamesh, etc.
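The "Portable SHMEM" item refers to the one-sided put/get style of the Cray T3E SHMEM library that GPSHMEM ports to other platforms. As a minimal sketch of that style (not GPSHMEM's own API; the entry points below follow the later OpenSHMEM standardization of the interface), each processing element writes directly into a neighbor's memory with no receive call at the target:

```c
#include <shmem.h>
#include <stdio.h>

int main(void) {
    static int dest = 0;            /* static => symmetric: exists on every PE */

    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Each PE puts its rank into `dest` on the next PE,
       with no action required by the target. */
    shmem_int_p(&dest, me, (me + 1) % npes);

    shmem_barrier_all();            /* complete the puts and make them visible */
    printf("PE %d of %d: dest = %d\n", me, npes, dest);

    shmem_finalize();
    return 0;
}
```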

5. Relationships Among the Parts (layered diagram)
• Application programming models: Message Passing, Remote Memory, Shared Memory, Mixed Models, Language Extensions, New Models
• Model instances: MPI, MPI-2, GA, GPSHMEM, OpenMP, OpenMP + MPI, CAF, UPC, Titanium, EARTH
• Implementation substrate: Common Runtime, ADI-3, ARMCI, Panda Parallel I/O, CAF Packages/Modules, Open64 Compiler, HDF-5
• Communication firmware: MPP switches, VIA, Myrinet, InfiniBand

6. Libraries
• Libraries for the remote memory access model (a one-sided MPI-2 sketch follows this list)
  • MPI and MPI-2
  • Global Arrays
    • GA combines a higher-level model with efficiency for application convenience
  • GP-SHMEM
    • Makes the popular Cray T3E model portable
  • Co-Array Fortran library
    • Object-based scientific library, written in CAF
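As a minimal sketch of the remote memory access model these libraries expose, here is MPI-2 one-sided communication in C: one process puts data directly into a memory window exposed by another, with fences separating the access epochs.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, nprocs;
    double buf[10] = {0.0};
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Expose buf on every rank as a window for one-sided access */
    MPI_Win_create(buf, (MPI_Aint)(10 * sizeof(double)), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);          /* open an access epoch */
    if (rank == 0 && nprocs > 1) {
        double val = 3.14;
        /* Write one double into rank 1's window at displacement 0,
           without any matching call on rank 1 */
        MPI_Put(&val, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);          /* close the epoch; the put is complete */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```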

7. Languages
• Three languages providing a software global address space (suitable for distributed memory) and parallelism
  • CAF (Co-Array Fortran)
  • UPC (Unified Parallel C)
  • Titanium (parallel Java)
• One language for shared memory
  • Scalable OpenMP (a loop-level sketch follows this list)
• The Open64 compiler infrastructure
  • Industrial-strength compiler for C, Fortran 9x, and C++
  • Used in the above projects
  • A contribution to the community
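For the shared-memory side, a minimal C/OpenMP sketch of the loop-level parallelism that the scalable OpenMP work targets: the compiler distributes the iterations across threads, and the reduction clause combines each thread's private partial sum.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread accumulates into a private partial sum;
       the reduction clause combines them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / ((double)i + 1.0);

    printf("harmonic(%d) = %f (max threads: %d)\n",
           n, sum, omp_get_max_threads());
    return 0;
}
```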

8. Cross-Project Infrastructure
• Runtime communication approaches
  • Exploiting NICs in support of parallel programming models
  • ARMCI
  • GASNet
• I/O
  • Active buffering in Panda
  • MPI-IO and parallel file systems
    • Integrating active buffering into the ROMIO implementation of MPI-IO (a collective-I/O sketch follows this list)
  • Scalable I/O for parallel languages
    • UPC
    • CAF I/O
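As a minimal sketch of the MPI-IO interface that ROMIO implements (illustrative only, not the project's active-buffering code), each rank writes a disjoint block of a shared file through a collective call, which gives the library room to aggregate and buffer the I/O behind the scenes:

```c
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    int data[4];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < 4; i++)
        data[i] = rank * 4 + i;

    /* Each rank writes its block at a disjoint offset; the
       collective call lets the library aggregate the I/O. */
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, (MPI_Offset)rank * 4 * sizeof(int),
                          data, 4, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
```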

9. New Programming Models
• Defining a new execution model
  • Semantics first
  • Define for performance
    • Must provide the enormous benefit Bill Camp mentioned
  • Define to support the best algorithms in support of applications
  • Define for likely HPC hardware, including
    • Many (zillions of) processors
    • Deep memory hierarchy
    • Some hardware support for the programming model
• Likely to have some kind of precisely relaxed memory consistency model
  • A common feature of all of the high-performance libraries and languages in the project (even OpenMP)
• Experiments with new concepts such as percolation (move the program to the data instead of the data to the program); a toy sketch follows this list
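A toy sketch of the percolation idea, assuming nothing about any project API (the TASK_SUM code and message protocol here are hypothetical): instead of pulling remote data to the requester, rank 0 ships a small task code to the rank that owns the data, which runs the computation locally and returns only the small result. Run with at least two ranks.

```c
/* Toy illustration of "move program to data"; not an API from
   any of the project's systems. */
#include <mpi.h>
#include <stdio.h>

#define TASK_SUM 1   /* hypothetical task code */

int main(int argc, char **argv) {
    int rank, data[4] = {1, 2, 3, 4};   /* rank 1 "owns" this data */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int task = TASK_SUM, result;
        /* Ship the task to the data's owner instead of fetching the data */
        MPI_Send(&task, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&result, 1, MPI_INT, 1, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("result computed at the data: %d\n", result);
    } else if (rank == 1) {
        int task, sum = 0;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (task == TASK_SUM)           /* run the program at the data */
            for (int i = 0; i < 4; i++) sum += data[i];
        MPI_Send(&sum, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```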

10. Connections With Other Programs
• Applications from SciDAC, NSF/PACI, etc.
• DARPA HPCS Program
  • John Mellor-Crummey (Rice) for HP
  • Bob Numrich (UMN) for SGI
  • Thomas Sterling (JPL/Caltech) for Cray
  • Kathy Yelick (Berkeley) for Sun
  • Guang Gao (U Delaware) for IBM
• ANL is a member of the Cray Affiliates program
• Open64 community
• OpenMP (U Houston formed a company to join the ARB, since only companies can be members)
• IBM Blue Gene/L and QCDOC
• More…
