
Computational Support for Parallel/Distributed AMR




Presentation Transcript


  1. Computational Support for Parallel/Distributed AMR Manish Parashar The Applied Software Systems Laboratory ECE/CAIP, Rutgers University www.caip.rutgers.edu/~parashar/TASSL

  2. Roadmap
  • Introduction to Berger-Oliger AMR
  • Hierarchical Linked Lists (L. Wild)
  • Overview of the GrACE Infrastructure
  • GrACE Programming Model and API
  • GrACE Design & Implementation
  • Current Research & Future Directions

  3. Cactus and GrACE
  • Cactus + GrACE
    • Transparent access to AMR via Cactus
    • GrACE Infrastructure Thorn
    • AMR Driver Thorn
  • Status
    • Unigrid driver in place
    • AMR driver under development

  4. Berger-Oliger Adaptive Mesh Refinement

  5. The AMR Concept
  • Problem: How to maximize solution accuracy for a given problem size with limited computational resources?
  • Solution: Use dynamically adaptive grids (instead of uniform grids), where the grid resolution is defined locally based on application features and solution quality.
  • Method: Adaptive Mesh Refinement (AMR)

  6. Adaptively Gridding the Application Domain Marsha Berger et al. (http://cs.nyu.edu/faculty/berger/)

  7. Adaptive Grid Structure

  8. Berger-Oliger AMR: Algorithm
  • Define adaptive grid structure
  • Define grid functions
  • Initialize grid functions
  • Repeat NumTimeSteps:
    • if (RegridTime) Regrid at Level
    • Integrate at Level
    • if (Level+1 exists):
      • Integrate at Level+1
      • Update Level from Level+1
  • End Repeat
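The recursive, subcycled time-stepping above can be sketched in a few lines of C++. This is an illustrative skeleton, not the GrACE API: each level takes one step, then drives its finer level through refinement-factor steps before being updated from it. All names (`Sketch`, `integrate`, `stepsTaken`) are hypothetical.

```cpp
#include <vector>

// Sketch of Berger-Oliger subcycled time-stepping: for every step a level
// takes, its child level takes `refineFactor` steps before the parent is
// updated from the finer solution (restriction). Illustrative names only.
struct Sketch {
    int maxLevel;
    int refineFactor;
    std::vector<int> stepsTaken;  // steps performed per level

    Sketch(int levels, int rf)
        : maxLevel(levels - 1), refineFactor(rf), stepsTaken(levels, 0) {}

    void integrate(int level) {
        stepsTaken[level]++;          // advance this level by one step
        if (level < maxLevel) {
            for (int i = 0; i < refineFactor; ++i)
                integrate(level + 1); // subcycle the finer level
            // here: Update Level from Level+1
        }
    }
};
```

With three levels and a refinement factor of 2, one coarse step drives two steps on level 1 and four on level 2, so all levels reach the same physical time.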

  9. Berger-Oliger AMR: Grid Hierarchy

  10. Hierarchical Linked Lists (HLL)

  11. HLL
  • AMR system devised by Lee Wild in 1996
  • Grid points are split into nodes of size refinement-factor in each direction
  • Refinement is performed on nodes
  • Avoids the clustering step required by box-based AMR schemes

  12. Status of HLL
  • Lee wrote a shared-memory version, which was tested on various problems and showed excellent scaling properties.
  • It is currently being re-implemented as a standalone library with shared-memory and MPI parallelism. This library will be used by a Cactus thorn to provide an AMR driver layer.

  13. GrACE: A Framework for Distributed AMR

  14. GrACE: An Overview

  15. Programming Interface
  • Coarse-grained SPMD data parallelism
  • C++ driver
    • declares and defines the computational domain and application variables in terms of GrACE programming abstractions
    • defines the overall structure of the AMR algorithm
  • FORTRAN/FORTRAN 90/C computational kernels
    • defined on regular arrays

  16. Programming Abstractions
  • Grid Hierarchy Abstraction
    • Template for the distributed adaptive grid hierarchy
  • Grid Function Abstraction
    • Application fields defined on the adaptive grid hierarchy
  • Grid Geometry Abstraction
    • High-level tools for addressing regions in the computational domain

  17. Grid Geometry Abstractions
  [Figure: a bounding box with lower bound (lbx, lby), upper bound (ubx, uby), and spacings dx, dy]
  • Coords
    • rank, x, y, z, ...
  • BBox
    • lb, ub, stride
  • BBoxList
  • Operations
    • union, intersection, cluster, refine/coarsen, difference, ...
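A minimal sketch of the BBox idea in C++, assuming a 2-D integer index space with inclusive bounds; this is illustrative, not the actual GrACE class (stride and the list/cluster operations are omitted).

```cpp
#include <algorithm>

// Hypothetical 2-D integer bounding box in the spirit of GrACE's BBox
// (lb, ub); bounds are inclusive indices in the grid index space.
struct BBox {
    int lb[2];  // lower-bound index in each dimension
    int ub[2];  // upper-bound index in each dimension (inclusive)

    bool empty() const { return lb[0] > ub[0] || lb[1] > ub[1]; }

    // Intersection: component-wise max of lower bounds, min of upper bounds.
    BBox intersect(const BBox& o) const {
        BBox r;
        for (int d = 0; d < 2; ++d) {
            r.lb[d] = std::max(lb[d], o.lb[d]);
            r.ub[d] = std::min(ub[d], o.ub[d]);
        }
        return r;
    }

    // Refine by an integer factor: the index space is scaled per dimension.
    BBox refine(int factor) const {
        BBox r;
        for (int d = 0; d < 2; ++d) {
            r.lb[d] = lb[d] * factor;
            r.ub[d] = (ub[d] + 1) * factor - 1;
        }
        return r;
    }
};
```

Intersection is what makes inter-grid transfers local: two patches interact only over the non-empty intersection of their (suitably refined or coarsened) boxes.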

  18. GridHierarchy Abstraction
  • Attributes:
    • number of dimensions
    • maximum number of levels
    • specification of the computational domain
    • distribution type
    • refinement factor
    • boundary type/width
  GridHierarchy GH(Dim,GridType,MaxLevs)

  19. GridFunction Abstraction
  • Attributes:
    • dimension and type
    • vector?
    • spatial/temporal stencils
    • associated GridHierarchy
    • prolongation/restriction functions
    • “shadow” specification
    • alignments
    • ghost cells
    • boundary types/updates
    • interaction types
    • flux registers?
    • parent storage?
  GridFunction(DIM)<T> GF(“gf”, Stencils, GH, …)

  20. GridFunction Operations
  • GridFunction storage for a particular time, level, and component (and hierarchy) is managed as a Fortran 90 array object.
  GF(t, l, c, Main/Shadow) <op> Scalar
  GF(t, l, c, Main/Shadow) <op> GF2(….)
  RedOp(GF, t, l, Main/Shadow)
  • <op> : =, +=, -=, /=, *=, …
  • RedOp: Max, Min, Sum, Product, Norm, ….

  21. Ghost Communications
  Sync (GF, Time, Level, Main/Shadow)
  Sync (GF, Time, Level, Axis, Dir, Main/Shadow)
  Sync (GH, Time, Level, Main/Shadow)
  • Ghost-region communications based on the GridFunction stencil attribute at the specified grid level
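The effect of a Sync() can be sketched with two 1-D blocks whose halos are filled from the neighbour's interior. This serial stand-in only illustrates the data movement that GrACE performs with messages; `Block` and `sync` are hypothetical names, and the halo width plays the role of the stencil attribute.

```cpp
#include <vector>

// Each block stores its interior plus `ghost` halo cells on each side:
// layout is [ghost | interior | ghost]. A sync fills each block's halo
// from the adjacent block's interior.
struct Block {
    int ghost;                 // halo width (stencil radius)
    std::vector<double> data;

    Block(int n, int g) : ghost(g), data(n + 2 * g, 0.0) {}
    int interiorSize() const { return (int)data.size() - 2 * ghost; }
};

// Copy left's rightmost interior cells into right's left halo, and
// right's leftmost interior cells into left's right halo.
void sync(Block& left, Block& right) {
    int nl = left.interiorSize();
    for (int g = 0; g < left.ghost; ++g) {
        right.data[g] = left.data[nl + g];                  // tail of left
        left.data[left.ghost + nl + g] =
            right.data[right.ghost + g];                    // head of right
    }
}
```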

  22. Region-based Communications
  Copy (GF, Time, Level, Reg1, Reg2, Main/Shadow)
  • Arbitrary copy (add, subtract) from Region 1 to Region 2 at the specified grid level.

  23. Data-parallel forall operator
  forall (gf, time, level, component)
    Call FORTRAN Subroutine…...
  end_forall
  • Parallel operation over all grid components at a particular time step and level.
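The forall semantics reduce to applying a kernel independently to every component (patch) at a level. In GrACE the kernel is typically a Fortran subroutine and the loop covers locally owned components in parallel; this serial C++ stand-in, with hypothetical names, captures only the semantics.

```cpp
#include <functional>
#include <vector>

// A "component" is a regular array a kernel can operate on; a level is the
// set of components. Since components are independent, the loop body is
// data parallel.
using Patch = std::vector<double>;

void forall_components(std::vector<Patch>& level,
                       const std::function<void(Patch&)>& kernel) {
    for (Patch& p : level)  // in GrACE: locally owned components, in parallel
        kernel(p);
}
```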

  24. Refinement & Regridding
  • Encapsulates:
    • Generation of refined grids
    • Redistribution
    • Load-balancing
    • Data-transfers
    • Interaction schedules
  Refine(GH, Level, BBoxList)
  RecomposeHierarchy(GH)
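The BBoxList passed to Refine() comes from clustering cells flagged by an error estimator into boxes. Production clusterers (e.g. Berger-Rigoutsos) split boxes to keep them efficient; the 1-D sketch below, with hypothetical names, just computes the minimal bounding interval of the flagged cells.

```cpp
#include <vector>

// Cluster flagged cells into a single bounding interval. Returns {-1, -1}
// when nothing is flagged (no refinement needed at this level).
struct Interval { int lo, hi; };

Interval clusterFlags(const std::vector<bool>& flagged) {
    Interval box{-1, -1};
    for (int i = 0; i < (int)flagged.size(); ++i) {
        if (!flagged[i]) continue;
        if (box.lo < 0) box.lo = i;  // first flagged cell
        box.hi = i;                  // last flagged cell so far
    }
    return box;
}
```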

  25. Prolongation/Restriction Functions
  • Set prolong/restrict functions for each GridFunction
  foreachGF(GH, GF, DIM, GFType)
    SetProlongFunction(GF, Pfunc);
    SetRestrictFunction(GF, Rfunc);
  end_forallGF
  • Prolong/Restrict
  Prolong(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …., Main/Shadow);
  Restrict(GF, TimeFrom, LevelFrom, TimeTo, LevelTo, Region, …., Main/Shadow);

  26. Checkpoint/Restart/Rollback
  • Checkpoint
  Checkpoint(GH, ChkPtFile);
    • Each GridFunction can be individually selected or deselected for checkpointing
    • Checkpoint files are independent of the # of processors
  • Restart
  ComposeHierarchy(GH, ChkPtFile);
  • Rollback
  RecomposeHierarchy(GH, ChkPtFile);

  27. IO Interface
  • Initialize IO
  ACEIOInit();
  • Select IO Type
  ACEIOType(GH, IOType);
    • IOType := ACEIO_HDF, ACEIO_IEEEIO, ..
  • BEGIN_COMPUTE/END_COMPUTE mark a region not executed by a dedicated IO node
  • Do IO
  Write(GF, Time, Level, Main, Double);
  • End IO
  ACEIOEnd(GH);

  28. Multigrid Interface
  • Determine the number of multigrid levels available
  MultiGridLevels(GH, Level, Main/Shadow);
  • Set up the multigrid hierarchy for a GridFunction
  SetUpMultiGrid(GF, Time, Level, MGlf, MGlc, Main/Shadow);
  SetUpMultiGrid(GF, Time, Level, Axis, MGlf, MGlc, Main/Shadow);
  • Do Multigrid
  GF(Time, Level, Comp, MGl, Main/Shadow)….;
  • Release the multigrid hierarchy
  ReleaseMultiGrid(GF, Time, Level, Main/Shadow);

  29. GrACE: Design & Implementation

  30. Software Engineering in the Small: Design Principles
  • Separation of Concerns
    • policy from mechanisms
    • data management from solution methods
    • storage semantics from addressing and access
    • computer science from computational science from engineering
  • Hierarchical Abstractions
    • application-specific programming abstractions
    • semantically specialized DSM
    • distributed shared objects
    • hierarchical, extendible index space + distributed dynamic storage

  31. Separation of Concerns => Hierarchical Abstractions
  [Layered architecture diagram: Application (application-specific) -> Application Components (method-specific modules and kernels: Solver, Interpolator, Error Estimator, Clusterer; cell-/vertex-/face-centered) -> Programming Abstractions (App. Objects: Grid Function, Grid Structure, Grid Geometry; Main/Shadow/Multigrid Hierarchy; Region, Point) -> Dynamic Data-Management (HDDA: Grid Index Space, Mesh Storage, Tree Access)]

  32. Hierarchical Distributed Dynamic Array (HDDA)
  • Distributed Array
    • Preserves array semantics over distribution
    • Reuses FORTRAN/C computational components
    • Communications are transparent
    • Automatic partitioning & load-balancing
  • Hierarchical Array
    • Each element can itself be an HDDA
  • Dynamic Array
    • An HDDA can grow and shrink dynamically
    • Efficient data management for adaptivity
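The hierarchical and dynamic aspects can be sketched in a few lines: an array whose every element may own a refined child array, with grow/shrink supplied by a dynamic container. This is an illustrative data-structure sketch, not the GrACE implementation; in particular it omits the distribution of elements over processors through a global index space.

```cpp
#include <memory>
#include <vector>

// One element of the hierarchy: a value plus an optional child array
// (null when unrefined). Refining attaches a child array of `factor`
// elements; coarsening drops it, shrinking the structure dynamically.
struct HNode {
    double value = 0.0;
    std::unique_ptr<std::vector<HNode>> children;

    void refine(int factor) {
        children = std::make_unique<std::vector<HNode>>(factor);
    }
    void coarsen() { children.reset(); }
};

// Count all nodes in the hierarchy rooted at an array of HNodes.
int countNodes(const std::vector<HNode>& a) {
    int n = 0;
    for (const HNode& h : a) {
        ++n;
        if (h.children) n += countNodes(*h.children);
    }
    return n;
}
```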

  33. HDDA: Separation of Concerns => Hierarchical Abstractions
  [Diagram: HDDA concerns — Index Space (name resolution, partitioning), Storage (expansion & contraction, consistency), Access (communication, interaction); serving data objects, interaction objects, and display objects]

  34. Distributed Dynamic Storage
  [Diagram: application locality is mapped to index locality, which is mapped to storage locality]

  35. Partitioning Issues
  • Locality
  • Parallelism
  • Load-balance
  • Cost

  36. Composite Distribution
  • Inter-grid communications are local
  • Data and task parallelism are exploited
  • Efficient load redistribution and clustering
  • Overhead of generating & maintaining the composite structure

  37. IO & Visualization

  38. Integrated Visualization & IO
  • Grid Hierarchy
    • Views: Multi-level, multi-resolution grid structure and connectivity; hierarchical and composite grid/mesh views; ….
    • Commands: Refine, coarsen, re-distribute, read, write, checkpoint, rollback, ….
  • Grid Function
    • Views: Multi/single-resolution plots, feature extraction and reduced models, isosurfaces, streamlines, etc.
    • Commands: Read, write, interpolate, checkpoint, rollback, ….
  • Grid Geometry
    • Views: Wire-frames with resolution and ownership information
    • Commands: Read, write, refine, coarsen, merge, ….
