
Using The Common Component Architecture to Design Simulation Codes




Presentation Transcript


  1. Using The Common Component Architecture to Design Simulation Codes J. Ray, S. Lefantzi and H. Najm Sandia National Labs, Livermore

  2. What is CCA?
  • Common Component Architecture
  • A component model for HPC and scientific simulations
  • Modularity and reuse of code
  • Lightweight specification
    • Few, if any, HPC functionalities are provided
    • Performance and flexibility, even at the expense of features
  • In other words:
    • High single-CPU performance
    • Does not impose a parallel programming model
    • Does not provide any parallel programming services either
    • But allows one to do parallel computing

  3. The CCA model
  • Component
    • An “object” implementing a functionality (physical/chemical model, numerical algorithm, etc.)
    • Provides access via interfaces (abstract classes): ProvidesPorts
    • Uses other components’ functionality via pointers to their interfaces: UsesPorts
  • Framework
    • Loads and instantiates components on a given CPU
    • Connects uses and provides ports
    • Driven by a script or GUI
    • Provides no numerical or parallel-computing support itself
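To make the provides/uses-port pattern concrete, here is a minimal C++ sketch. The class and method names (Port, Services, ChemistryPort, setServices, and so on) are simplified stand-ins chosen for illustration, not the actual CCA or Ccaffeine interfaces.

```cpp
// Minimal sketch of the provides/uses-port idea; the names below are
// illustrative stand-ins, not the real CCA/Ccaffeine API.
#include <string>

struct Port { virtual ~Port() = default; };          // abstract interface base

// A "provides" port: functionality a chemistry component exposes to others.
struct ChemistryPort : Port {
  virtual void computeRates(const double* y, double* ydot, int n) = 0;
};

// The framework hands each component a Services object, through which the
// component registers the ports it provides and declares the ports it uses.
struct Services {
  virtual void addProvidesPort(Port* p, const std::string& name) = 0;
  virtual void registerUsesPort(const std::string& name) = 0;
  virtual Port* getPort(const std::string& name) = 0;
  virtual ~Services() = default;
};

// A component that *uses* the chemistry port: it never sees the concrete
// chemistry class, only the abstract interface handed back by the framework.
class ToyIntegrator {
public:
  void setServices(Services* svc) {
    svc_ = svc;
    svc_->registerUsesPort("chemistry");              // "I need someone's chemistry"
  }
  void advance(double* y, int n, double dt) {
    auto* chem = static_cast<ChemistryPort*>(svc_->getPort("chemistry"));
    double ydot[64];                                   // toy fixed-size buffer
    chem->computeRates(y, ydot, n);                    // call across the port
    for (int i = 0; i < n; ++i) y[i] += dt * ydot[i];  // toy explicit update
  }
private:
  Services* svc_ = nullptr;
};
```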

  4. Details of the architecture
  • Four CCA-compliant frameworks exist
    • Ccaffeine (Sandia) and Uintah (Utah) are used routinely for HPC
    • XCAT (Indiana) and decaff (LLNL) are used mostly in distributed-computing environments
  • Ccaffeine
    • The component writer deals with performance issues
    • Also deals with MPI calls amongst components
    • Supports SPMD computing; no distributed computing yet
    • Allows language interoperability
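What the framework itself does can also be sketched in a few lines: it instantiates the components named in a script or GUI and records which uses port is wired to which provides port, after which the components call each other directly. The MiniFramework class below is a hypothetical stand-in written only for this sketch; it is not Ccaffeine's script syntax or API.

```cpp
// Hypothetical mini-"framework" spelling out load/instantiate/connect.
// A real CCA framework does this generically, driven by a script or GUI,
// and stays out of the way during the time-stepping loop.
#include <map>
#include <memory>
#include <string>

struct Component { virtual ~Component() = default; };

class MiniFramework {
public:
  // "Load and instantiate a component on this CPU" (here: just store it).
  Component* instantiate(const std::string& name, std::unique_ptr<Component> c) {
    instances_[name] = std::move(c);
    return instances_[name].get();
  }
  // "Connect a uses port to a provides port" (here: just record the wiring;
  // a real framework hands the using component a pointer to the provider).
  void connect(const std::string& user, const std::string& usesPort,
               const std::string& provider, const std::string& providesPort) {
    wiring_[user + ":" + usesPort] = provider + ":" + providesPort;
  }
private:
  std::map<std::string, std::unique_ptr<Component>> instances_;
  std::map<std::string, std::string> wiring_;
};
```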

  5. A CCA code

  6. Pictorial example

  7. Guidelines regarding apps
  • Hydrodynamics: governed by PDEs
  • Spatial derivatives: finite differences, finite volumes (a small stencil sketch follows below)
  • Wide ranges of timescales and length scales
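As a concrete, generic example of the spatial discretizations mentioned above, here is a second-order central-difference Laplacian on a uniform 2-D grid; it is illustrative only and not the authors' actual discretization.

```cpp
// Generic second-order central-difference Laplacian on a uniform 2-D grid.
#include <vector>

void laplacian(const std::vector<double>& u, std::vector<double>& lap,
               int nx, int ny, double h) {
  const double inv_h2 = 1.0 / (h * h);
  for (int j = 1; j < ny - 1; ++j)
    for (int i = 1; i < nx - 1; ++i) {
      const int k = j * nx + i;                 // row-major index of cell (i, j)
      lap[k] = (u[k - 1] + u[k + 1] + u[k - nx] + u[k + nx] - 4.0 * u[k]) * inv_h2;
    }
}
```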

  8. Solution strategy
  • Timescales
    • Explicit integration of the slow ones
    • Implicit integration of the fast ones
    • Coupled via Strang splitting (sketched below)
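This strategy, explicit integration of the slow (transport/diffusion) terms and implicit integration of the fast (stiff chemistry) terms coupled by Strang splitting, has the standard half-step/full-step/half-step structure sketched below. The two operator callbacks are placeholders, not the authors' routines.

```cpp
#include <functional>
#include <vector>

using State = std::vector<double>;

// Strang-split step: half step of the slow operator, full step of the fast
// (stiff) operator, second half step of the slow operator. advanceSlow and
// advanceFast stand in for the explicit and implicit integrators.
void strangStep(State& y, double dt,
                const std::function<void(State&, double)>& advanceSlow,
                const std::function<void(State&, double)>& advanceFast) {
  advanceSlow(y, 0.5 * dt);
  advanceFast(y, dt);
  advanceSlow(y, 0.5 * dt);
}
```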

  9. Solution strategy (cont’d)
  • Wide spectrum of length scales → adaptive mesh refinement
  • Structured, axis-aligned patches (GrACE)
    • Start with a uniform coarse mesh
    • Identify regions needing refinement, collate them into rectangular patches
    • Impose a finer mesh in the patches
    • Recurse; the result is a mesh hierarchy
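The refine-and-recurse loop in the list above can be sketched as follows; Patch, flagAndCollate and refine are hypothetical stand-ins for what the GrACE library actually manages, and the stub bodies exist only so the sketch compiles.

```cpp
#include <vector>

// Hypothetical stand-in for a GrACE-managed patch.
struct Patch {
  int level;       // refinement level of this patch
  double dx;       // mesh spacing on this patch
};

// Stub: a real implementation runs an error estimator over the patch and
// clusters the flagged cells into rectangular boxes.
std::vector<Patch> flagAndCollate(const Patch&) { return {}; }

// Stub: impose a finer mesh on one collated box.
Patch refine(const Patch& box, int ratio) { return {box.level + 1, box.dx / ratio}; }

// Build the mesh hierarchy: refine flagged regions of each patch and recurse
// until the maximum depth is reached or nothing is flagged.
void buildHierarchy(const Patch& patch, int maxLevels,
                    std::vector<Patch>& hierarchy, int ratio = 2) {
  hierarchy.push_back(patch);
  if (patch.level + 1 >= maxLevels) return;
  for (const Patch& box : flagAndCollate(patch))
    buildHierarchy(refine(box, ratio), maxLevels, hierarchy, ratio);
}
```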

  10. A mesh hierarchy

  11. App 1: a reaction-diffusion system
  • A coarse approximation to a flame
  • H2–air mixture; ignition via 3 hot spots
  • 9 species, 19 reactions, stiff chemistry
  • 1 cm × 1 cm domain, 100×100 coarse mesh, finest mesh spacing = 12.5 microns
  • Timescales: O(10 ns) to O(10 microseconds)
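A quick sanity check of the quoted mesh numbers (the factor-2 refinement ratio assumed in the comment is not stated on the slide):

```cpp
// 1 cm domain on a 100x100 coarse mesh gives a 100-micron coarse spacing;
// 100 / 12.5 = 8 = 2^3, i.e. three factor-2 refinement levels below the
// coarse mesh, assuming a refinement ratio of 2.
#include <cstdio>

int main() {
  const double domain_cm = 1.0;
  const double coarse_dx_um = domain_cm * 1.0e4 / 100.0;  // = 100 microns
  const double finest_dx_um = 12.5;                        // quoted on the slide
  std::printf("coarse dx = %g um, coarse/finest = %g\n",
              coarse_dx_um, coarse_dx_um / finest_dx_um);
}
```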

  12. App. 1 - the code

  13. Evolution

  14. Details • H2O2 mass fraction profiles.

  15. App. 2: shock hydrodynamics
  • Shock hydrodynamics
  • Finite-volume method (Godunov-type)
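For reference, a schematic 1-D Godunov-type finite-volume update looks like the sketch below. The Riemann solver here is a trivial stub (upwind flux for unit-speed linear advection), whereas the real shock-hydro code solves the Euler equations on the AMR patches.

```cpp
#include <vector>

// Trivial stub flux so the sketch compiles: upwind flux for linear advection
// with unit speed. A real Godunov scheme uses an (approximate) Riemann
// solver for the Euler equations at each cell interface.
double riemannFlux(double uL, double /*uR*/) { return uL; }

// Conservative finite-volume update: compute interface fluxes, then update
// each interior cell by the flux difference. flux[i] is the flux at the
// interface between cells i-1 and i.
void godunovStep(std::vector<double>& u, double dx, double dt) {
  const int n = static_cast<int>(u.size());
  std::vector<double> flux(n + 1, 0.0);
  for (int i = 1; i < n; ++i)
    flux[i] = riemannFlux(u[i - 1], u[i]);
  for (int i = 1; i < n - 1; ++i)
    u[i] -= (dt / dx) * (flux[i + 1] - flux[i]);
}
```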

  16. Interesting features
  • The shock and the material interface are sharp discontinuities → need refinement
  • The shock deposits vorticity, a governing quantity for turbulence, mixing, …
  • Insufficient refinement under-predicts vorticity, hence slower mixing/turbulence

  17. App 2. The code

  18. Evolution

  19. Convergence

  20. Are components slow?
  • C++ compilers optimize numerical code less aggressively than Fortran compilers
  • Virtual-pointer lookup overhead when accessing a derived class via a pointer to the base class
  • Test problem: Y′ = F(Y); solve [I − (Δt/2) J] Y = H(Y^n) + G(Y^m); CVODE used to solve this system
  • Evaluating J and G requires a call to a component (a chemistry mock-up)
  • Δt was changed to make convergence harder, i.e. more J and G evaluations
  • Results compared against plain C plus the CVODE library
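The overhead in question is concentrated in how J and G are evaluated inside the implicit solve: the plain C + CVODE baseline calls the chemistry routine directly, while the componentized version reaches it through an abstract port, i.e. one virtual dispatch per evaluation. The sketch below shows the two call paths with stand-in names; it is not the actual test code.

```cpp
// The two call paths being compared (names are stand-ins, not the real code).
struct ChemistryPort {                            // abstract "provides" interface
  virtual void jacobian(const double* y, double* J) = 0;
  virtual ~ChemistryPort() = default;
};

struct ChemistryMockup : ChemistryPort {          // componentized path
  void jacobian(const double* /*y*/, double* /*J*/) override { /* fill J from y */ }
};

void jacobian_direct(const double* /*y*/, double* /*J*/) { /* same work, direct call */ }

// Every rebuild of [I - (dt/2) J] inside the solver triggers one of these
// paths; shrinking dt makes convergence harder and multiplies the number of
// J and G evaluations, which is what the test varies.
void buildMatrix(ChemistryPort* chem, const double* y, double* J) {
  chem->jacobian(y, J);                           // virtual call: the only extra cost
}
```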

  21. Components versus library

  22. Really so?
  • The difference is in calling overhead
  • Test: F77 versus components
    • 500 MHz Pentium III
    • Linux 2.4.18
    • gcc 2.95.4-15
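A minimal timing harness in the spirit of that test is sketched below: it compares many direct calls against many virtual calls to a function that does almost no work, so the dispatch overhead (and the lost inlining opportunity) dominates. This is an assumption-laden reconstruction; the original test used F77 versus components on the hardware and compiler listed above, and absolute numbers will differ.

```cpp
#include <chrono>
#include <cstdio>

struct Port { virtual double f(double x) = 0; virtual ~Port() = default; };
struct Impl : Port { double f(double x) override { return x * 1.0000001; } };
double f_direct(double x) { return x * 1.0000001; }

int main() {
  const long N = 100000000L;                 // enough iterations to see the overhead
  Impl impl;
  Port* p = &impl;                           // call through the base-class pointer
  using clk = std::chrono::steady_clock;

  auto t0 = clk::now();
  double a = 1.0;
  for (long i = 0; i < N; ++i) a = f_direct(a);   // direct call (often inlined)
  auto t1 = clk::now();
  double b = 1.0;
  for (long i = 0; i < N; ++i) b = p->f(b);       // virtual dispatch on every call
  auto t2 = clk::now();

  std::printf("direct : %.3f s\nvirtual: %.3f s\n(results %g %g)\n",
              std::chrono::duration<double>(t1 - t0).count(),
              std::chrono::duration<double>(t2 - t1).count(), a, b);
}
```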

  23. Scalability
  • Shock-hydro code, no refinement
  • 200×200 and 350×350 meshes
  • Cplant cluster
    • 400 MHz EV5 Alphas
    • 1 Gb/s Myrinet
  • Worst performance: 73% scaling efficiency for the 200×200 mesh on 48 processors
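Using the usual definition of parallel (scaling) efficiency, speedup divided by processor count, the quoted worst case works out as below; the interpretation in the final comment is an inference, not a statement from the slides.

```cpp
// efficiency = speedup / nprocs  =>  speedup = efficiency * nprocs
#include <cstdio>

int main() {
  const double efficiency = 0.73;   // quoted worst case (200x200 mesh)
  const int nprocs = 48;
  std::printf("effective speedup ~ %.1f on %d processors\n",
              efficiency * nprocs, nprocs);
  // ~35x; presumably the 200x200 mesh leaves too little work per processor
  // at 48 processors, so communication overhead starts to dominate.
}
```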

  24. Summary
  • Componentized code
  • Very different physics/numerics obtained by replacing physics components
  • Single-CPU performance not harmed by componentization
  • Scalability: no effect
  • Flexible, parallel, etc.
  • A success story…? Not so fast…

  25. Pros and cons
  • Cons:
    • A set of components solves a PDE subject to a particular numerical scheme
    • The numerics decide the main subsystems of the component assembly
    • Variations on the main theme are easy
    • Too large a change and you have to recreate a big percentage of the components
  • Pros:
    • Physics components appear at the bottom of the hierarchy
    • Changing physics models is easy
  • Note: adding new physics that requires a brand-new numerical algorithm is NOT trivial
