
Design and Performance of the MPAS-A Non-hydrostatic atmosphere model


Presentation Transcript


1. Design and Performance of the MPAS-A Non-hydrostatic atmosphere model
Michael Duda (1) and Doug Jacobsen (2)
(1) National Center for Atmospheric Research*, NESL
(2) Los Alamos National Laboratory, COSIM
*NCAR is funded by the National Science Foundation

2. What is the Model for Prediction Across Scales?
A collaboration between NCAR (MMM) and LANL (COSIM) to develop models for climate, regional climate, and NWP applications:
• MPAS-Atmosphere (NCAR)
• MPAS-Ocean (LANL)
• MPAS-Ice (LANL)
• MPAS framework, infrastructure* (NCAR, LANL)
These models use a centroidal Voronoi tessellation (CVT) with a C-grid staggering for their horizontal discretization. The prognostic velocities are the velocities normal to cell faces ("edges") at the point where the edge intersects the arc joining the cells on either side (from Ringler et al. 2008).
*The MPAS infrastructure handles general (conformal?) unstructured horizontal grids!
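As a rough illustration of this layout (a sketch only, not MPAS source; the toy connectivity is hypothetical and the array names merely follow common MPAS naming conventions), a C-grid CVT stores one prognostic normal velocity per edge, and cell-based quantities such as divergence can be built from edge fluxes:

```python
# Illustrative sketch of a C-grid CVT layout: one normal velocity per edge,
# with cellsOnEdge giving the two cells that share each edge. Toy mesh only.
import numpy as np

nCells, nEdges = 4, 6
cellsOnEdge = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2], [1, 3]])
u = np.zeros(nEdges)              # prognostic velocity normal to each edge
dvEdge = np.ones(nEdges)          # edge lengths (hypothetical values)
areaCell = np.ones(nCells)        # cell areas (hypothetical values)

def divergence(u, cellsOnEdge, dvEdge, areaCell):
    """Cell-averaged divergence from edge-normal velocities (C-grid style)."""
    div = np.zeros(len(areaCell))
    for e, (c1, c2) in enumerate(cellsOnEdge):
        flux = u[e] * dvEdge[e]   # positive u points from cell c1 toward c2
        div[c1] += flux
        div[c2] -= flux
    return div / areaCell

print(divergence(u, cellsOnEdge, dvEdge, areaCell))   # all zeros for u = 0
```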

3. MPAS software architecture
• 1. Driver layers – The high-level DRIVER calls the init, run, and finalize methods implemented by the core-independent SUBDRIVER; the DRIVER can be replaced by a coupler, and the SUBDRIVER can include import and export methods for model state
• 2. MPAS core – The CORE contains the science code that performs the computational work (pre-/post-processing, model time integration, etc.) of MPAS; each core's implementation lives in a separate sub-directory and is selected at compile time
• 3. Infrastructure – The infrastructure provides the data types used by the core and the rest of the model, plus communication, I/O, and generic computational operations on CVT meshes such as interpolation
(A diagram on the slide uses arrows to indicate interaction between the components of the MPAS architecture.)
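A minimal sketch of this layering (Python pseudocode with hypothetical names; the real driver layers are Fortran): the DRIVER only sees the SUBDRIVER's init/run/finalize interface, which is what lets a coupler stand in for the stock driver without touching core code.

```python
# Sketch of the DRIVER -> SUBDRIVER -> CORE layering (hypothetical names).
class MPASCore:
    """Science code; selected at compile time in the real model."""
    def init(self, config): ...
    def run(self, n_steps): ...
    def finalize(self): ...

class Subdriver:
    """Core-independent layer; could also expose import/export of model state."""
    def __init__(self, core):
        self.core = core
    def init(self, config):   self.core.init(config)
    def run(self, n_steps):   self.core.run(n_steps)
    def finalize(self):       self.core.finalize()
    # def export_state(self): ...        # hooks a coupler would use
    # def import_state(self, state): ...

def driver(subdriver, config, n_steps):
    """Top-level DRIVER: replaceable by a coupler calling the same methods."""
    subdriver.init(config)
    subdriver.run(n_steps)
    subdriver.finalize()

driver(Subdriver(MPASCore()), config={}, n_steps=10)
```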

4. Parallel decomposition
The dual mesh of a Voronoi tessellation is a Delaunay triangulation – essentially the connectivity graph of the cells. Parallel decomposition of an MPAS mesh then becomes a graph partitioning problem: distribute nodes equally among partitions (give each process equal work) while minimizing the edge cut (minimizing parallel communication).
Graph partitioning:
• We use the Metis package for parallel graph decomposition
• Currently done as a pre-processing step, but could be done "on-line"
• Metis also handles weighted graph partitioning: given a priori estimates of the computational cost of each grid cell, we can better balance the load among processes
A sketch of the pre-processing step is shown below.
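For illustration (the file names, adjacency source, and exact invocation here are assumptions, not taken from the slide), the cell-connectivity graph can be written in the Metis graph-file format and partitioned off-line with the stand-alone gpmetis tool:

```python
# Sketch: dump the cell-connectivity graph in the Metis graph-file format and
# partition it with gpmetis as a pre-processing step. Toy adjacency only.
import subprocess

def write_metis_graph(path, adjacency):
    """adjacency[i] = list of 0-based neighbor cells of cell i."""
    n_cells = len(adjacency)
    n_edges = sum(len(nbrs) for nbrs in adjacency) // 2
    with open(path, "w") as f:
        f.write(f"{n_cells} {n_edges}\n")
        for nbrs in adjacency:
            f.write(" ".join(str(c + 1) for c in nbrs) + "\n")  # Metis is 1-based

adjacency = [[1, 3], [0, 2], [1, 3], [0, 2]]    # a tiny 4-cell ring, toy data
write_metis_graph("graph.info", adjacency)

# Partition into 2 blocks; gpmetis writes one owning-partition id per cell
# to graph.info.part.2 (requires the Metis tools to be installed).
subprocess.run(["gpmetis", "graph.info", "2"], check=True)
```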

5. Parallel decomposition (2)
• Given an assignment of cells to a process, any number of layers of halo (ghost) cells may be added
• Cells are stored in a 1-d array (2-d with a vertical dimension, etc.), with the halo cells at the end of the array; the order of the real cells may be updated to provide better cache re-use
• With a complete list of the cells stored in a block, adjacent edge and vertex locations can be found; we apply a simple rule to determine the ownership of edges and vertices adjacent to real cells in different blocks
(The slide illustrates a block of cells owned by a process, the block plus one layer of halo/ghost cells, and the block plus two layers of halo/ghost cells.)
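A minimal sketch of how halo layers can be grown from the cell adjacency (illustrative only; function and variable names are hypothetical), with owned cells first and each successive halo layer appended at the end of the cell array:

```python
# Sketch: grow halo layers around a block of owned cells by repeated expansion
# over the cell-adjacency graph, appending each layer after the owned cells.
def build_block(owned, adjacency, n_halo_layers=2):
    """owned: cell ids owned by this block; adjacency[c] = neighbors of cell c."""
    cells = list(owned)                 # owned ("real") cells come first
    known = set(owned)
    frontier = set(owned)
    halo_layers = []
    for _ in range(n_halo_layers):
        layer = set()
        for c in frontier:
            for nbr in adjacency[c]:
                if nbr not in known:
                    layer.add(nbr)
        halo_layers.append(sorted(layer))
        cells.extend(sorted(layer))     # each halo layer lands at the end
        known |= layer
        frontier = layer
    return cells, halo_layers

adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
cells, halos = build_block(owned=[0], adjacency=adjacency, n_halo_layers=2)
print(cells, halos)    # [0, 1, 2, 3] [[1, 2], [3]]
```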

6. Model infrastructure
I/O: provides parallel I/O through an API that uses infrastructure DDTs
• High-level interface for creating "streams" (groups of fields that are read/written at the same time from/to a file)
• Underlying I/O functionality is provided by CESM's PIO
PARALLELISM: implements operations on field types needed for parallelism
• E.g., adding halo cells to a block, halo-cell updates, all-to-all
• Callable from either serial or parallel code (a no-op for serial code)
• For multiple blocks per process, the differences between shared memory and MPI are hidden
OPERATORS: provides implementations of general operations on CVT meshes
• Horizontal interpolation via RBFs, 1-d spline interpolation
• Ongoing work to add a generic advection operator for C-grid staggered CVTs
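To make the "stream" idea concrete, here is a toy sketch (not the MPAS or PIO API; the class, field names, and writer callback are hypothetical) of a named group of fields written together at the same cadence:

```python
# Sketch of the "stream" concept: a named group of fields read/written together.
# Illustration of the idea only; real MPAS streams sit on top of CESM's PIO.
class Stream:
    def __init__(self, name, fields, interval):
        self.name = name
        self.fields = fields          # dict: field name -> array-like data
        self.interval = interval      # e.g. output interval in timesteps

    def write(self, step, writer):
        """Write every field in the stream when its interval comes due."""
        if step % self.interval == 0:
            for fname, data in self.fields.items():
                writer(self.name, step, fname, data)

history = Stream("history", {"u": [0.0], "theta": [300.0]}, interval=6)
history.write(step=12, writer=lambda s, t, n, d: print(s, t, n, d))
```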

7. The MPAS registry
The need to support different cores in the MPAS framework suggests that the developer of a core would need to write "copy-and-paste" code to handle, for each field, aspects such as:
• Field definition
• Allocation/deallocation of field structures
• Halo setup
• I/O calls

8. The MPAS Registry
MPAS has employed a computer-aided software engineering tool (the "Registry") to isolate the developer of a core from the details of the data structures used inside the framework
• An idea borrowed from the WRF model (Michalakes, 2004)
• The Registry is a "data dictionary": each field has an entry providing metadata and other attributes (type, dims, I/O streams, etc.)
• Each MPAS core is paired with its own registry file
• At compile time, a small C program is first compiled; the program runs, parses the registry file, and generates Fortran code
• Among other things, it creates the code to allocate, read, and write fields
For the dynamics-only non-hydrostatic atmosphere model, the Registry generates ~23,200 lines of code, versus 5,100 lines hand-written for the dynamics and 23,500 lines hand-written for the infrastructure.
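As a toy illustration of the idea (the real generator is a small C program and its entry syntax differs; the simplified entries and the generated statements below are assumptions), registry entries can be turned mechanically into Fortran allocation code:

```python
# Sketch of registry-driven code generation: the real generator is a small C
# program run at compile time; these entries and the output are simplified.
registry = [
    {"name": "u",     "type": "real", "dims": ["nVertLevels", "nCells"]},
    {"name": "theta", "type": "real", "dims": ["nVertLevels", "nCells"]},
]

def generate_allocation_code(entries):
    """Emit Fortran-style allocation statements, one per registry entry."""
    lines = []
    for e in entries:
        dims = ", ".join(e["dims"])
        lines.append(f"allocate({e['name']} % array({dims}))")
    return "\n".join(lines)

print(generate_allocation_code(registry))
# allocate(u % array(nVertLevels, nCells))
# allocate(theta % array(nVertLevels, nCells))
```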

9. Role of the Registry in coupling
The Registry can generate more than just Fortran code – anything we'd like it to generate based on registry entries, in fact!
• Information for metadata-driven couplers
• Documentation (a similar idea to doxygen for generating source-code documentation)
An example registry entry:
field {
  name: u
  dimensions: nVertLevels, nCells
  units: "m s-1"
  description: "normal velocity on cell faces"
  coupled-write: true
  coupled-read: needed
}
The syntax of the MPAS registry files is easily changed or updated
• It could be extended to permit additional attributes and metadata
• We're considering a new format for the registry files to accommodate richer metadata
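For instance (a sketch only; the parsed entry is hand-written here rather than produced by a real registry parser, and the outputs are illustrative), the same entry could drive both a documentation line and a coupler export list:

```python
# Sketch: the same registry entry can drive non-Fortran outputs, e.g. a
# documentation line and the list of fields a metadata-driven coupler exports.
entry = {
    "name": "u",
    "dimensions": ["nVertLevels", "nCells"],
    "units": "m s-1",
    "description": "normal velocity on cell faces",
    "coupled-write": True,
    "coupled-read": "needed",
}

doc_line = (f"{entry['name']} ({entry['units']}): {entry['description']} "
            f"[dims: {', '.join(entry['dimensions'])}]")
coupler_exports = [entry["name"]] if entry.get("coupled-write") else []

print(doc_line)         # u (m s-1): normal velocity on cell faces [dims: ...]
print(coupler_exports)  # ['u']
```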

10. MPAS-A scalability
(Figures: MPAS-A scaling on the 60-km mesh, yellowstone and bluefire; MPAS-A scaling on the 60-km mesh, yellowstone.)
The full MPAS-A solver (physics + dynamics, no I/O) achieves >75% parallel efficiency down to about 160 owned cells per MPI task.

11. MPAS-A scalability (2)
(Figure: MPAS-A timing breakdown, 60-km mesh, yellowstone.)
A lower bound for the number of ghost cells in two halo layers, Ng, is roughly Ng = 4·(√(π·No) + π), where No is the number of owned cells (treating the block of owned cells as a disk of unit-area cells): No = 80 → Ng ≈ 76; No = 40 → Ng ≈ 57.
• Halo communication ("comm") accounts for a rapidly growing share of the total solver time
• The physics are currently all column-independent and scale almost perfectly
• Redundant computation in the halos limits the scalability of the dynamics
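A quick numerical check of that bound (a sketch; the disk-of-unit-cells geometry is an assumption used to state the bound, not something shown on the slide):

```python
# Sketch: lower bound on the two-layer halo size for a block of No owned cells,
# treating the block as a disk of unit-area cells (assumption for this bound).
import math

def halo_lower_bound(n_owned, n_layers=2):
    r = math.sqrt(n_owned / math.pi)          # radius of the equivalent disk
    # each halo layer is roughly a ring of unit-width cells around the block
    return sum(2.0 * math.pi * (r + k + 0.5) for k in range(n_layers))

for n_owned in (80, 40):
    print(n_owned, round(halo_lower_bound(n_owned)))   # -> 80 76, then 40 57
```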

12. Strategies for minimizing communication costs
• Aggregate halo exchanges for fields with the same stencil
  – Not currently implemented
  – In MPAS-A there are limited areas where we exchange multiple halos at the same time; restructuring the solver code might help
• Use one MPI task per shared-memory node, and assign that task as many blocks as there are cores on the node
  – Supported already in the MPAS infrastructure
  – Initial testing is under way in MPAS-O and the MPAS shallow-water model; block loops are parallelized with OpenMP
• Overlap computation and communication by splitting halo exchanges into a begin phase and an end phase with non-blocking communication (see the sketch below)
  – Prototype code has been written to do this; it looks promising
  – Restructuring the MPAS-A solver might improve the opportunities to take advantage of this option
  – At odds with aggregated halo exchanges?
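A minimal sketch of the begin/end split using non-blocking MPI (mpi4py is used here purely for illustration; the actual prototype is Fortran, and the neighbor lists and buffers below are hypothetical):

```python
# Sketch: split a halo exchange into a begin phase (post non-blocking sends and
# receives) and an end phase (wait), so interior computation can proceed in
# between. Illustrative only; buffers and neighbor lists are hypothetical.
from mpi4py import MPI

def halo_exchange_begin(comm, send_bufs, recv_bufs, neighbors, tag=0):
    """Post Irecv/Isend for every neighboring block and return the requests."""
    reqs = []
    for nbr in neighbors:
        reqs.append(comm.Irecv(recv_bufs[nbr], source=nbr, tag=tag))
        reqs.append(comm.Isend(send_bufs[nbr], dest=nbr, tag=tag))
    return reqs

def halo_exchange_end(reqs):
    """Block until all halo messages have completed."""
    MPI.Request.Waitall(reqs)

# Usage pattern inside a solver step:
#   reqs = halo_exchange_begin(comm, send_bufs, recv_bufs, neighbors)
#   compute_on_interior_cells()          # work that needs no halo data
#   halo_exchange_end(reqs)
#   compute_on_cells_near_block_edges()  # work that needs fresh halo data
```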

13. Summary
• MPAS is a family of Earth-system component models sharing a common software framework
• The infrastructure should be general enough for most horizontally unstructured (conformal?) grids
• Extensive use of derived types enables simple interfaces to infrastructure functionality
• Besides PIO, we've chosen to implement functionality "from scratch"
• The Registry mechanism in MPAS could be further leveraged for the maintenance of metadata and for coupling purposes
• We're just beginning to experiment with new approaches to minimizing communication costs in the MPAS-A solver
• Any improvements to the infrastructure can be leveraged by all MPAS cores
