
Software Integration Status and Plans


Presentation Transcript


  1. Software Integration Status and Plans Gábor Tóth UM CRASH Team, Ann Arbor, MI October 29, 2010

  2. This talk is a status update and part of our response to the review team recommendations in the V&V area
  Outline of Talk
  • Software architecture and modeling schema
  • New algorithms and work in progress
  • Testing strategy
  • Example tests
  • Version control
  • Documentation
  UQ integration is discussed by Paul Drake and James Holloway

  3. Software Architecture and Current Algorithms
  [Architecture diagram; legend: blue = work in progress, red = added this year]
  CRASH (BATSRUS):
  • 5-material hydro with EOS
  • Flux-limited gray/multi-group diffusion
  • Flux-limited electron heat conduction
  • 2D AMR (R-Z) or 3D AMR (X-Y-Z)
  • Split semi-implicit time stepping
  • Blurred synthetic radiograph
  • Laser package, external EOS tables, improved parallel I/O, adjoint method
  CRASH initialization (HYADES):
  • Multi-material hydro with EOS
  • Multi-group flux-limited diffusion
  • Electron heat conduction
  • Laser heating
  • 1D or 2D Lagrangian
  • More flexible grid options
  • Hand-off to CRASH via flat file(s): (ρ, u, p, Te, Tr, Er, g, m)(x, r)
  PDT:
  • Multi-group discrete ordinates
  • 3D or 2D R-Z adaptive grid
  • Implicit time stepping, TBDF-2
  • Krylov methods for absorption
  • Nearly optimal sweep scheduling
  • Multigroup diffusion, improved I/O
  • Coupling to CRASH via parallel communication: (Er, Sre, Srm)(x, y, z) and (rho, u, p, Te, m)(x, y, z)
  UQ:
  • Physics-informed emulator
  • Data reduction
  • Postprocessor statistical analysis

  4. New Algorithms
  • Full energy conservation for electron/radiation physics
    • Total energy update, but apply the slope limiter on primitive variables
  • Electron flux limiter
    • Limit the Spitzer-Härm flux by a fraction of the free-streaming heat flux
  • 5-material EOS and opacity calculations
    • Xe, Be, Au, acrylic, polyimide
    • Use level sets with signed distance
  • Block Adaptive Tree Library (BATL)
    • Efficient dynamic AMR in 1, 2 and 3D
  • Split semi-implicit scheme
    • Solve each energy group and the electrons independently
    • Requires less memory; allows PCG; sometimes faster
  • Synthetic radiographs with blurring (see the sketch after this list)
    • Add Poisson noise due to the finite photon count
    • Smooth at the scale that corresponds to the pinhole size
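As a concrete illustration of the radiograph blurring described above, here is a minimal Python/NumPy sketch (CRASH itself is written in Fortran; the function name, photon count, and pinhole width in pixels are assumed values, not taken from the code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_radiograph(transmission, photons_per_pixel=1.0e4,
                    pinhole_sigma_pixels=2.0, seed=0):
    """Add Poisson photon noise and pinhole-scale smoothing to a
    synthetic radiograph given as a 2D array of transmitted intensity."""
    rng = np.random.default_rng(seed)
    # Poisson noise from the finite photon count: the expected counts per
    # pixel scale with the transmitted intensity.
    counts = rng.poisson(transmission * photons_per_pixel)
    noisy = counts / photons_per_pixel
    # Smooth at the scale corresponding to the pinhole size.
    return gaussian_filter(noisy, sigma=pinhole_sigma_pixels)

# Hypothetical usage on a random "radiograph":
if __name__ == "__main__":
    blurred = blur_radiograph(np.random.rand(256, 256))
```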

  5. Work in Progress
  • Laser package
    • Alternative to HYADES; full control
    • Allows 3D initialization
  • External EOS/opacity tables
    • Alternative to the inline EOS/opacity calculations
    • Collaboration with Marcel Klapisch and Michel Busquet
  • Multi-level preconditioning
    • Using the HYPRE library
    • Should scale better than the BILU preconditioner
  • Adjoint method
    • Reverse execution of a CRASH simulation from a series of restart files; provides sensitivity information
    • Implemented but needs to be tested
  • Improved parallel I/O options (see the sketch after this list)
    • HDF5, striped I/O access; prototype implemented
    • Allows visualization with various software packages
    • Potentially faster output from the code
  • More robust feature recognition in radiographs
    • Based on thresholds and integral quantities
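To illustrate the HDF5 output option, a minimal Python/h5py sketch of the idea; the file name, dataset names, and attributes are assumptions for illustration only (the actual CRASH prototype is in Fortran and its file schema is not shown in the talk):

```python
import numpy as np
import h5py

def write_plot_file(filename, rho, te, tr, step, time):
    """Write one plot snapshot to HDF5 so external tools can read it."""
    with h5py.File(filename, "w") as f:
        f.attrs["step"] = step          # time step index
        f.attrs["time"] = time          # simulation time
        # One dataset per plotted variable; gzip keeps files compact.
        f.create_dataset("rho", data=rho, compression="gzip")
        f.create_dataset("Te",  data=te,  compression="gzip")
        f.create_dataset("Tr",  data=tr,  compression="gzip")

# Hypothetical usage with random data on a 64x64x64 grid:
if __name__ == "__main__":
    shape = (64, 64, 64)
    write_plot_file("crash_0001.h5", np.random.rand(*shape),
                    np.random.rand(*shape), np.random.rand(*shape),
                    step=1, time=1.3e-9)
```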

  6. CRASH Adjoint Solver
  • Purpose of the adjoint
    • Accelerate UQ through efficient calculation of the sensitivity of an output to a large number of inputs and parameters (see the sketch below)
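To make the reasoning concrete, a generic discrete-adjoint sketch; the notation (forward step F_n, output J, adjoint variable λ) is assumed for illustration and is not taken from the CRASH implementation. One backward sweep through the stored restart files yields the gradient of a single output with respect to all inputs and parameters at roughly the cost of one additional forward run, which is why it accelerates UQ.

```latex
u_{n+1} = F_n(u_n, p), \qquad J = J(u_N, p)
\quad\Longrightarrow\quad
\lambda_N = \left(\frac{\partial J}{\partial u_N}\right)^{\!T},\qquad
\lambda_n = \left(\frac{\partial F_n}{\partial u_n}\right)^{\!T}\lambda_{n+1},\qquad
\frac{dJ}{dp} = \frac{\partial J}{\partial p}
  + \sum_{n=0}^{N-1}\left(\frac{\partial F_n}{\partial p}\right)^{\!T}\lambda_{n+1}.
```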

  7. Verification and Testing
  • New program units are implemented with unit tests
    • Nightly execution of many unit tests
  • New features are implemented together with verification tests
    • Daily (18-hour) verification and full-system tests are run on a 16-core Mac
    • Tests should cover all aspects of the new feature, including restart
    • Using grid convergence studies and model-model comparison (see the sketch after this list)
  • Compatibility and reproducibility of features are checked with the functionality test suite
    • Nightly runs ensure that bugs are discovered early
    • Running on 8 different platforms/compilers on 1 to 4 cores tests portability
  • Parallel scaling tests
    • Weekly scaling test on 128 and 256 cores of hera
    • Many-core runs require DAT; reveals software and hardware issues
    • Confirms that results are independent of the number of cores
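As an illustration of the grid-convergence criterion, a minimal Python sketch; the function names, refinement ratio, and pass tolerance are assumptions and are not taken from the CRASH test scripts. A verification test passes if the observed order of accuracy, estimated from errors on successively refined grids, matches the expected order.

```python
import numpy as np

def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grid resolutions."""
    return np.log(error_coarse / error_fine) / np.log(refinement_ratio)

def convergence_test_passes(errors, expected_order, tol=0.2):
    """Pass if every successive refinement converges at the expected rate."""
    orders = [observed_order(e1, e2) for e1, e2 in zip(errors, errors[1:])]
    return all(abs(p - expected_order) < tol for p in orders)

# Hypothetical example: a second-order scheme on three grids (dx halved each time)
print(convergence_test_passes([1.0e-2, 2.6e-3, 6.4e-4], expected_order=2.0))
```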

  8. Nightly unit and functionality tests
  • 96 SWMF tests
  • 65 BATSRUS tests
  • 4 unique to CRASH, including restart

  9. Daily CRASH Verification and Full System Tests
  • 29 verification tests: pass if they converge at the correct rate
  • 15 full system tests (including restart): pass if the results have not changed

  10. Verification Test: electron physics
  • Initial condition is a semi-analytic radiation solution of R. Lowrie
  • Use an electron EOS that mimics the radiation energy: Ee = a·Te^4
  • Energy exchange and heat conduction follow Kramers' formula: ∝ ρ·Te^-3.5
  • Solution is a supercritical Mach 5 shock with a thermal front, advected on a rotated AMR grid
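Written out, the analogy this test exploits looks roughly as follows (standard relations as read from the slide; the exact coefficients used in CRASH may differ):

```latex
E_r = a T_r^4 \;\;\text{(radiation)}, \qquad
E_e \equiv a T_e^4 \;\;\text{(chosen electron EOS)}, \qquad
\text{exchange and conduction coefficients} \;\propto\; \rho\, T_e^{-3.5}.
```

Because the electron energy then has the same functional form as the radiation energy density, Lowrie's semi-analytic radiative-shock solution can serve as a reference solution for verifying the electron-physics update.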

  11. Verification Test: 1D electron physics
  [Plot: solution profile showing the thermal front, the hydro shock, and the relaxation region]

  12. Full system test: 5 materials and 30 groups with electron physics on an R-Z grid
  • 6 micron effective resolution with 1 level of AMR
  • Unsplit vs. split semi-implicit schemes: 2017 s vs. 1167 s wall-clock time
  [Figure: blurred synthetic radiograph]

  13. Full System Test: 3D gray with elliptical nozzle
  [Figure panels: Y = 0 cut, Z = 0 cut, synthetic radiograph along Y, synthetic radiograph along Z]

  14. Strong Parallel Scaling of CRASH 2.1
  • 3D semi-implicit multigroup diffusion and heat conduction
  • 2 levels of dynamic AMR, 2.6 million cells in 4x4x4 blocks
  [Scaling plot with curves for Pleiades and Hera]

  15. Weak Parallel Scaling of PDT on hera
  • Per core: 2048 cells with 8 spatial unknowns, 80 directions and 10 energy groups
  • 1 to 12288 cores (maximum of 1.6 × 10^11 unknowns)
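For reference, the quoted maximum unknown count follows directly from the per-core figures; a quick, purely illustrative check in Python:

```python
# Unknowns per core times number of cores, using the numbers on the slide.
cells, spatial_unknowns, directions, groups, cores = 2048, 8, 80, 10, 12288
total = cells * spatial_unknowns * directions * groups * cores
print(f"{total:.2e}")   # about 1.61e+11, i.e. the ~1.6e11 quoted above
```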

  16. Weak Parallel Scaling of PDT on hera

  17. Version Release
  • Development is done in a single CVS branch (HEAD version)
  • Code version number
    • Incremented when code behavior is significantly modified
    • Saved with runs
  • Code is tagged in CVS before and after major changes
    • Allows recovery of a previous version
    • Allows comparison of results, performance, portability, etc.
  • Release: stable versions are created as CVS tags or branches
    • Released versions 1.0, 1.1, 2.0 and 2.1 so far
    • Includes the BATSRUS, CRASHTEST and CRASH_data repositories
    • The released stable versions are used by UQ

  18. Several Layers of Documentation
  • CRASH and UQ primers
  • Documentation of new algorithms
    • Developed during preliminary discussions
    • Added to CVS when the code is committed
  • Software Development Standards document
    • Quality requirements, testing procedures, data naming, etc.
  • Documentation of changes in the CVS logs
    • Required for every change; generates an email
  • Documentation in the source code
    • XML description of the input parameters produces most of the user manual
  • User manual describes full usage, including examples
    • CRASH full system tests serve as working examples

  19. Concluding Remarks
  • We have made substantial progress with our algorithms
  • We use good software engineering practices
    • Unit tests are developed with new code
    • Nightly functionality tests
    • Daily verification test suite covering all aspects of CRASH physics
    • All CVS changes generate a notification email to the developers
  • Future algorithmic development
    • Driven by efficiency and functionality requirements
    • Driven by UQ
  • Interaction with UQ software
    • Through scripts
