
The Magnetic Reconnection Code within the FLASH Framework


Presentation Transcript


  1. The Magnetic Reconnection Code within the FLASH Framework Timur Linde, Leonid Malyshkin, Robert Rosner, and Andrew Siegel University of Chicago June 5, 2003 Princeton, NJ

  2. Overview • FLASH project in general • FLASH's role in Magnetic Reconnection Code (MRC) development

  3. What is FLASH? What is MRC? • Initially: an AMR code for astrophysics problems on ASCI machines (compressible hydro + burning) • FLASH evolved into two things: • A more general application code • A framework for building/hosting new problems • FLASH physics modules + FLASH framework = FLASH application code; Hall MHD modules + FLASH framework = Magnetic Reconnection Code (see the sketch below) • Next: • What physics modules does FLASH contain? • What services does the FLASH framework contain?
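
  A minimal Python sketch of the "modules + framework = application" idea above. The names here (Framework, Module, advance, evolve) are illustrative assumptions, not FLASH's actual API; FLASH itself composes applications at build time via its setup tool rather than at run time as shown.

    class Module:
        """A physics module the framework drives each time step."""
        def __init__(self, name):
            self.name = name

        def advance(self, state, dt):
            print(f"advancing {self.name} by dt={dt}")

    class Framework:
        """Shared framework services (mesh, I/O, ...) hosting the modules."""
        def __init__(self, modules):
            self.modules = modules

        def evolve(self, state, dt, nsteps):
            for _ in range(nsteps):
                for m in self.modules:
                    m.advance(state, dt)

    # FLASH physics modules + FLASH framework = FLASH application code
    flash_app = Framework([Module("compressible hydro"), Module("burning")])

    # Hall MHD modules + FLASH framework = Magnetic Reconnection Code
    mrc = Framework([Module("Hall MHD")])
    mrc.evolve(state={}, dt=1e-3, nsteps=2)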

  4. FLASH breakdown • Physics modules: (in)compressible hydro, relativistic hydro/MHD, resistive MHD, 2-D Hall MHD, (nuclear) reaction networks, time-dependent ionization, various equations of state, particles, self-gravity, Boltzmann transport, subgrid models, front-tracking • Framework: block-structured AMR (Paramesh), parallel I/O (HDF5), runtime visualization (pVTK), runtime performance monitoring (PAPI), generic linear solvers tied to the mesh (see the sketch below), syntax/tool for building new solvers • Code support (public web-based): • flash_test • flash_benchmark • Coding standard verification • Bug/feature tracker • User support schedule • Download: http://flash.uchicago.edu
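
  One reading of "generic linear solvers tied to the mesh": a new solver implements a small interface against the framework-provided grid. A hypothetical Python sketch (MeshSolver and its methods are assumptions for illustration, not FLASH's real solver syntax/tool):

    import numpy as np
    from abc import ABC, abstractmethod

    class MeshSolver(ABC):
        """Interface a new solver implements to plug into the framework."""
        def __init__(self, mesh):
            self.mesh = mesh  # framework-provided grid description

        @abstractmethod
        def solve(self, rhs, guess):
            """Return x such that L(x) = rhs on self.mesh."""

    class JacobiPoisson(MeshSolver):
        """Toy solver: Jacobi sweeps for the 1-D Poisson problem u'' = rhs."""
        def solve(self, rhs, guess, sweeps=200):
            x, h = guess.copy(), self.mesh["dx"]
            for _ in range(sweeps):
                x[1:-1] = 0.5 * (x[:-2] + x[2:] - h * h * rhs[1:-1])
            return x

    solver = JacobiPoisson(mesh={"dx": 1.0 / 32})
    u = solver.solve(rhs=np.full(33, -1.0), guess=np.zeros(33))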

  5. General features of FLASH • Three major releases over four years • 300,000+ lines (F90 / C / Python) • Good performance • Scalable on ASCI machines to 5K processors • Gordon Bell prize (2000) • Emphasis on portability, interoperability • Standardization of AMR output format, data sharing via CCA • Flash 2.3 • New release, scheduled June 1, 2003 • Optimized multigrid solver • Significant improvements in documentation • Ported to Compaq Tru64 • 2-D runtime visualization • Optimized uniform grid • Support for different mesh geometries • FFT on uniform grid • Optimized multigrid on uniform grid • Paramesh 3.0 • Parallel NetCDF I/O module • Implicit diffusion • Flash 2.4 • Final 2.x version (Sept 2004)

  6. FLASH foci • Four initial major emphases • Performance • Testing • Usability • Portability • Later progress in extensibility/reuse: Flash v3.x • Generalized mesh variable database (see the sketch below) • FLASH component model • FLASH Developer’s Guide
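
  The "generalized mesh variable database" is a data-structure idea: modules register and look up mesh variables by name instead of hard-coding array indices. A hypothetical sketch of the concept (the real Flash 3.x design is not reproduced here):

    import numpy as np

    class VariableDatabase:
        """Name-keyed storage for mesh variables shared across modules."""
        def __init__(self, mesh_shape):
            self.mesh_shape = mesh_shape
            self._vars = {}

        def register(self, name):
            self._vars.setdefault(name, np.zeros(self.mesh_shape))

        def get(self, name):
            return self._vars[name]

    db = VariableDatabase(mesh_shape=(16, 16))
    db.register("dens")        # a hydro module registers density
    db.register("magx")        # an MHD module registers B_x
    db.get("dens")[:] = 1.0    # any module reads/writes by name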

  7. The future of Flash • Take this a step further: identify the “actors” • A. End-users: run an existing problem • B. Module/problem contributors: use the database/module interface but are unaware of Flash internals • C. Flash developers: work on general framework issues, utility modules, performance, portability, etc., according to the needs of astrophysics and (laboratory) code validation • Flash development has successively focused on these three areas • Flash 1.x: emphasis on A • Flash 2.x: expand emphasis to B • Flash 3.x: expand emphasis to C • Note: • Application scientists lean toward A and B; programmers/software engineers lean toward C; computer scientists can be involved at any level • Everybody contributes to the design process; the software architect must make final decisions on how to implement the plan.

  8. FLASH and CMRS • Follows the typical pattern of FLASH collaborations • Prototyping, testing, results initially external to FLASH if desired • Iowa AMR-based Hall MHD – Kai Germaschewski • No “commitment” to FLASH • Interoperability strategy agreed upon: • How are solvers packaged? • What data structures are used? • What operations must the mesh support? → component model (see the sketch below)
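
  The last question is the heart of the component model: a solver written against an agreed mesh interface can run unchanged on FLASH's Paramesh or on the Iowa local framework. A hypothetical Python sketch of what such an interface might cover (the operation names are illustrative, not an agreed standard):

    from abc import ABC, abstractmethod

    class MeshInterface(ABC):
        """Operations a hosted solver may assume the mesh provides."""

        @abstractmethod
        def blocks(self):
            """Iterate over the leaf blocks owned by this process."""

        @abstractmethod
        def guardcell_fill(self, var):
            """Exchange ghost-zone (guard cell) data for variable var."""

        @abstractmethod
        def restrict(self, var):
            """Average fine-level data down to the next coarser level."""

        @abstractmethod
        def prolong(self, var):
            """Interpolate coarse-level data onto newly refined blocks."""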

  9. CMRS/Flash strategy • Move portable components between FLASH and the local framework as needs warrant • People strategy: • The FLASH developer leading the FLASH single-fluid MHD work (Timur Linde) leads the Chicago MRC development • CMRS supports a postdoctoral fellow (Leonid Malyshkin) fully engaged in developing/testing the MRC • We also support a new graduate student (Claudio Zanni/U. Torino) working on the MRC and its extensions • Science strategy: • The immediate target of our efforts is reconnection • Specifically: what is the consequence of relaxing the “steady state” assumption of reconnection - can one have fast reconnection in time-dependent circumstances under conditions in which steady reconnection cannot occur? (see the note below)
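
  For context, a standard result not taken from the slides: the classical steady-state (Sweet-Parker) model predicts an inflow reconnection rate

    \[
      \frac{v_{\mathrm{in}}}{v_A} \sim S^{-1/2},
      \qquad
      S = \frac{L\, v_A}{\eta},
    \]

  where S is the Lundquist number, L the current-sheet length, v_A the Alfvén speed, and \eta the magnetic diffusivity. At astrophysical Lundquist numbers (S of order 10^12 and beyond) this steady rate is far too slow to explain observed events such as solar flares, which is why the question above (whether time dependence can break the steady-state scaling) is central.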

  10. Using FLASH • Some advantages of FLASH • Tested nightly • Constantly ported to new platforms • I/O optimized independently • Visualization developed independently • Documentation manager • User support • Bug database • Performance measured regularly • AMR (tested/documented independently) • Coding standards enforcement scripts • Debugged frequently (lint, Forcheck) • Sophisticated versioning, repository management • Possible interplay with other physics modules (particles, etc.)

  11. Where are we now? • We have a working 3-D resistive/viscous AMR MHD code • It has already been used by R. Fitzpatrick in his study of compressible reconnection • MRC v1.0 exists • FLASH and 2-D Hall MHD have been joined and are being tested • This required elliptic solves for the Helmholtz and Poisson equations (i.e., multigrid; see the sketch below) • Based on reusable components • This was done by importing the Iowa Hall MHD code as a “module”, but using our own Poisson and Helmholtz solvers; hence we solve exactly the same equations as the Iowa “local framework” • We are now running comparisons of the MRC with the Iowa Hall MHD code • The next steps are: • Inclusion of full 3-D Hall MHD, again implemented in a staged manner (almost completed) • More flexible geometry: cylindrical, toroidal
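
  As background on the multigrid point above: a minimal 1-D multigrid V-cycle for the Poisson problem u'' = f. This is a toy sketch, not FLASH's optimized AMR solver; the smoother and grid-transfer operators are the simplest possible choices.

    import numpy as np

    def smooth(u, f, h, sweeps=3):
        # Gauss-Seidel sweeps for u'' = f with u fixed at the endpoints.
        for _ in range(sweeps):
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
        return u

    def v_cycle(u, f, h):
        # One V-cycle: pre-smooth, solve the coarse error equation
        # recursively, apply the correction, post-smooth.
        u = smooth(u, f, h)
        if len(u) <= 3:
            return u
        r = np.zeros_like(u)  # residual r = f - u''
        r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
        rc = r[::2].copy()    # restrict residual to the coarse grid
        ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
        e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
        return smooth(u + e, f, h)

    n, h = 65, 1.0 / 64
    f = np.full(n, -1.0)      # u'' = -1 with u(0) = u(1) = 0
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(u[n // 2])          # exact solution x(1-x)/2 gives 0.125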

  12. Concluding remarks • Code emphases: • Standards of interoperability • Simple: common I/O formats – can reuse postprocessing tools (see the sketch below) • More complex: reusing solvers from one meshing package in another – libAMR (Colella) • More complex: a standard interface for meshing packages • Robustness, performance, portability, ease of use • Science emphases: • The focus is on an astrophysically interesting and central problem • The problem is also highly susceptible to laboratory verification
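
  To illustrate the "common I/O formats" point: FLASH writes standard HDF5 files (slide 4), so generic tools can post-process its output. A sketch using h5py; the filename and the "dens" dataset name are assumptions, which is why the snippet first lists what the file actually contains:

    import h5py

    with h5py.File("mrc_chk_0000.h5", "r") as f:  # hypothetical filename
        f.visit(print)             # list every group/dataset in the file
        if "dens" in f:            # hypothetical dataset name
            dens = f["dens"][...]  # read the dataset into a NumPy array
            print(dens.shape, dens.min(), dens.max())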

  13. … which brings us to Questions and discussion
