
Staggered mesh methods for MHD- and charged particle simulations of astrophysical turbulence




  1. Staggered mesh methods for MHD- and charged particle simulations of astrophysical turbulence Åke Nordlund Niels Bohr Institute for Astronomy, Physics, and Geophysics University of Copenhagen

  2. Context examples • Star Formation • The IMF is a result of the statistics of MHD turbulence • Planet Formation • Gravitational fragmentation (or not!) • Stars • Turbulent convection determines structure & BCs • Stellar coronae & chromospheres • Heated by magnetic dissipation

  3. Charged particle contexts • Solar Flares • To what extent is MHD OK? • Particle acceleration mechanisms? • Reconnection & dissipation? • Gamma-Ray Bursts • Relativistic collisionless shocks? • Weibel instability creates B? • Synchrotron radiation or jitter radiation?

  4. Overview • MHD methods • Godunov-like vs. direct • Staggered mesh vs. centered method • Radiative transfer • Fast & cheap methods • Charged particle dynamics • Methods & examples

  5. Solving the (M)HD Partial Differential Equations (PDEs) • Godunov-type methods • Solve the local Riemann problem (approx.) • OK in ideal gas hydro • MHD: 7 waves, 648 combos (cf. Schnack’s talk) • Constrained Transport (CT) • Gets increasingly messy when adding • gravity ... • non-ideal equation of state (ionization) ... • radiation ...

  6. Direct methods • Evaluate right hand sides (RHS) • High order spatial derivatives & interpolations • Spectral • Compact • Local stencils • e.g. 6th order derivatives, 5th order interpolations • Step the solution forward in time • Runge-Kutta type methods (e.g. 3rd order): • Adams-Bashforth • Hyman’s method • RK3-2N (see the sketch below) • Saves memory – uses only F and dF/dt (hence 2N)
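For readers unfamiliar with the "2N" label: a low-storage third-order Runge-Kutta step keeps only the solution and one accumulated right-hand-side register per variable. Below is a minimal sketch using Williamson's classic low-storage coefficients and a toy right-hand side; the actual coefficients and RHS routines used in the Stagger code may differ.

      ! Minimal sketch of a 2N-storage, 3rd-order Runge-Kutta step
      ! (Williamson-type coefficients).  Illustration only: the Stagger
      ! code's actual coefficients and right-hand sides may differ.
      program rk3_2n_demo
        implicit none
        integer, parameter :: n = 64
        real, dimension(n) :: u, dudt               ! only U and dU/dt stored (hence "2N")
        real, parameter :: alpha(3) = (/ 0., -5./9., -153./128. /)
        real, parameter :: beta(3)  = (/ 1./3., 15./16., 8./15. /)
        real :: dt, t
        integer :: istep, isub

        u = 1.0 ; dudt = 0.0 ; dt = 0.01 ; t = 0.0
        do istep = 1, 100
          do isub = 1, 3
            dudt = alpha(isub)*dudt + rhs(u)        ! accumulate into the single dF register
            u    = u + beta(isub)*dt*dudt           ! update the solution register
          end do
          t = t + dt
        end do
        print *, 'u(1) at t =', t, ':', u(1), '  exact:', exp(-t)

      contains
        function rhs(v) result(f)                   ! toy right-hand side: du/dt = -u
          real, intent(in) :: v(:)
          real :: f(size(v))
          f = -v
        end function rhs
      end program rk3_2n_demo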

  7. Which variables? Conservative! • Mass • Momentum • Internal energy • not total energy • consider cases where magnetic or kinetic energy dominates • total energy is well conserved • e.g. Mach 5 supersonic 3D-turbulence test (Wengen) • less than 0.5% change in total energy
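Schematically (a standard per-volume formulation with internal energy, written here for reference and not copied from the slides), the variable set is the mass density ρ, momentum density ρu, internal energy per unit volume e, and magnetic field B:

$$
\begin{aligned}
\frac{\partial\rho}{\partial t} &= -\nabla\cdot(\rho\mathbf{u}), &
\frac{\partial(\rho\mathbf{u})}{\partial t} &= -\nabla\cdot(\rho\mathbf{u}\mathbf{u} - \boldsymbol{\tau}) - \nabla P + \mathbf{J}\times\mathbf{B} + \rho\mathbf{g}, \\
\frac{\partial e}{\partial t} &= -\nabla\cdot(e\mathbf{u}) - P\,\nabla\cdot\mathbf{u} + Q_{\mathrm{visc}} + Q_{\mathrm{Joule}} + Q_{\mathrm{rad}}, &
\frac{\partial\mathbf{B}}{\partial t} &= -\nabla\times\mathbf{E}, \quad \mathbf{E} = -\mathbf{u}\times\mathbf{B} + \eta\mathbf{J},
\end{aligned}
$$

with all dissipative heating appearing explicitly as source terms in the internal-energy equation (see the next slide).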

  8. Dissipation • Working with internal energy also means that all dissipation (kinetic to thermal, magnetic to thermal) must be explicit • Shock- and current sheet-capturing schemes • Negative part of divergence captures shocks • Ditto for cross-field velocity captures current sheets
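A minimal sketch of the shock-capturing idea in one dimension: the artificial viscosity is built from the negative part of the velocity divergence, so it switches on only where the flow converges. The constant nu_shock and the exact functional form are illustrative, not the Stagger-code formula.

      ! Sketch: shock-capturing viscosity from the negative part of div(u).
      ! Coefficient and functional form are illustrative only.
      program shock_visc_demo
        implicit none
        integer, parameter :: n = 100
        real :: u(n), divu(n), nu(n), dx
        real, parameter :: nu_shock = 1.0
        integer :: i

        dx = 1.0/n
        do i = 1, n                                 ! a converging flow: u goes from +1 to -1
          u(i) = 1.0 - 2.0*real(i-1)/real(n-1)
        end do

        divu = 0.0
        do i = 2, n-1                               ! centred 1-D divergence
          divu(i) = (u(i+1) - u(i-1))/(2.0*dx)
        end do

        ! Viscosity coefficient: nonzero only in compressions (div u < 0)
        nu = nu_shock * dx**2 * max(-divu, 0.0)

        print *, 'max shock viscosity coefficient:', maxval(nu)
      end program shock_visc_demo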

  9. Advantages • Much simpler • HD ~ 700 flops / point (6th/5th order in space) • ENZO ~ 10,000 flops / point • FLASH ~ 20,000 flops / point • MHD ~ 1100 flops / point • Trivial to extend • Non-ideal equation-of-state • Radiative energy transfer • Relativistic

  10. Direct method: Disadvantages? • Smaller Courant numbers allowed • 3 sub-step limit ~ 0.6 (runs at 0.5) • 2 sub-step limit ~ 0.4 (runs at 0.333) • PPM typically runs at 0.8 → a factor 1.6 further per full step (unless directionally split) • Comparison of hydro flops • ~2,000 (direct, 3 sub-steps) • ~10,000 (ENZO/PPM, FLASH/PPM) • Need to also compare flops per second • Cache use?
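For concreteness, a sketch of the Courant-limited time step being compared; the value 0.5 matches the "runs at 0.5" figure quoted above, and the fast-speed estimate and variable names are illustrative.

      ! Sketch: Courant-limited MHD time step, dt = C * dx / max(|u| + c_fast).
      program courant_demo
        implicit none
        integer, parameter :: n = 1000
        real :: u(n), cs(n), ca(n), cfast(n), dx, dt
        real, parameter :: courant = 0.5            ! cf. "runs at 0.5" above

        call random_number(u)  ; u  = 2.0*(u - 0.5) ! velocities in [-1,1]
        call random_number(cs) ; cs = cs + 0.1      ! sound speeds
        call random_number(ca) ; ca = ca + 0.1      ! Alfven speeds
        dx = 1.0/n

        cfast = sqrt(cs**2 + ca**2)                 ! upper bound on the fast-mode speed
        dt = courant * dx / maxval(abs(u) + cfast)
        print *, 'dt =', dt
      end program courant_demo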

  11. Perhaps much more diffusive? • 2D implosion test indicates not so • square area with central, rotated low pressure square • generates thin ’jet’ with vortex pairs • moves very slowly, in ~ pressure equilibrium • essentially a wrinkled 2D contact discontinuity • see Jim Stone’s test pages, with references

  12. 2D Implosion Test

  13. Imagine: non-ideal EOS + shocks + radiation + conduction along B • Ionization: large to small across a shock • Radiation: thick to thin across a shock • Heat conduction only along B • ... • Riemann solver? Any volunteers? • Operator and/or direction split? • With anisotropic resistivity & heat conduction?!

  14. Non-ideal EOS + radiation + MHD: Validation? • Godunov-type methods • No exact solutions to check against • Difficult to validate • Direct methods • Need only check conservation laws • mass & momentum, no direct change • energy conservation; easy to verify • Valid equations + stable methods → valid results

  15. Staggered Mesh Code (Nordlund et al.) • Cell-centered mass and thermal energy densities • Face-centered momenta and magnetic fields • Edge-centered electric fields and electric currents • Advantages: • simplicity; OpenMP (MPI between boxes) • consistency (e.g., div B = 0; see the sketch below) • conservative, handles extreme Mach numbers
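A minimal sketch of the div B = 0 consistency property: because the discrete difference operators commute, the divergence of the discrete curl vanishes to rounding error, so advancing dB/dt = -curl(E) can never create numerical monopoles. First-order periodic differences stand in here for the code's 6th-order staggered operators.

      ! Sketch: the discrete divergence of a discrete curl vanishes, so
      ! dB/dt = -curl(E) preserves div B = 0 to rounding error.
      ! First-order periodic differences stand in for the 6th-order operators.
      program divb_demo
        implicit none
        integer, parameter :: n = 16
        real, dimension(n,n,n) :: Ex, Ey, Ez, dBx, dBy, dBz, divB
        integer :: i, j, k, ip, jp, kp
        real :: dx

        dx = 1.0/n
        call random_number(Ex); call random_number(Ey); call random_number(Ez)

        do k = 1, n;  kp = mod(k,n) + 1             ! dB/dt = -curl(E), forward differences
          do j = 1, n;  jp = mod(j,n) + 1
            do i = 1, n;  ip = mod(i,n) + 1
              dBx(i,j,k) = -((Ez(i,jp,k)-Ez(i,j,k)) - (Ey(i,j,kp)-Ey(i,j,k)))/dx
              dBy(i,j,k) = -((Ex(i,j,kp)-Ex(i,j,k)) - (Ez(ip,j,k)-Ez(i,j,k)))/dx
              dBz(i,j,k) = -((Ey(ip,j,k)-Ey(i,j,k)) - (Ex(i,jp,k)-Ex(i,j,k)))/dx
            end do
          end do
        end do

        do k = 1, n;  kp = mod(k,n) + 1             ! divergence of the update
          do j = 1, n;  jp = mod(j,n) + 1
            do i = 1, n;  ip = mod(i,n) + 1
              divB(i,j,k) = ( dBx(ip,j,k)-dBx(i,j,k) + dBy(i,jp,k)-dBy(i,j,k) &
                            + dBz(i,j,kp)-dBz(i,j,k) )/dx
            end do
          end do
        end do

        print *, 'max |div dB/dt| =', maxval(abs(divB))   ! ~ rounding error
      end program divb_demo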

  16. Code Philosophy • Simplicity • F90/95 for ease of development • Simplicity minimizes operator count • Conservative (per-volume variables) • Can nevertheless handle SNe in the ISM • Accuracy • 6th/5th order in space, 3rd order in time • Speed • About 650,000 zone-updates/sec on a laptop

  17. Code Development Stages • Simplest possible code • Dynamic allocation • No need to recompile for different resolutions • F95 array-valued function calls (see the sketch below) • P4 speed is the SAME as with subroutine calls • SMP/OMP version • OpenMP directives added • Uses auto-parallelization and/or OMP on SUN, SGI & IBM • MPI version for clusters • Implemented with CACTUS (see www.cactuscode.org) • Scales to arbitrary number of CPUs
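A small illustration of the calling styles mentioned above: an array-valued function (as in stagger-code/src-simple) versus a subroutine filling a scratch array (as in the OpenMP production version). The operator here is a toy first-order periodic difference; the names merely mimic the real ddxup / ddxup_set routines.

      ! Sketch of the two calling styles; the difference operator is a toy
      ! stand-in for the real 6th-order staggered derivative.
      module stagger_demo
        implicit none
        integer, parameter :: mx = 32, my = 32, mz = 32
      contains
        function ddxup(f) result(d)                 ! array-valued function style
          real, intent(in) :: f(mx,my,mz)
          real :: d(mx,my,mz)
          d = cshift(f, 1, dim=1) - f               ! f(i+1)-f(i), dx omitted
        end function ddxup

        subroutine ddxup_set(f, d)                  ! equivalent subroutine style
          real, intent(in)  :: f(mx,my,mz)
          real, intent(out) :: d(mx,my,mz)
          d = cshift(f, 1, dim=1) - f
        end subroutine ddxup_set
      end module stagger_demo

      program call_styles
        use stagger_demo
        implicit none
        real, dimension(mx,my,mz) :: a, d1, d2
        call random_number(a)
        d1 = ddxup(a)                               ! function-valued call
        call ddxup_set(a, d2)                       ! subroutine call
        print *, 'identical results:', all(d1 == d2)
      end program call_styles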

  18. CACTUS • Provides • “flesh” (application interface) • Handles cluster-communication • E.g. MPI (but not limited to MPI) • Handles GRID computing • Presently experimental • Handles grid refinement and adaptive meshes • AMR not yet available • “thorns” (applications and services) • Parallel I/O • Parameter control (live!) • Diagnostic output • X-Y plots • JPEG slices • Isosurfaces

  19. MHD mhd.f90

  20. Example Code • Induction Equation • stagger-code/src-simple • Makefile (with includes for OS- and host-dep) • Subdirectories with optional code: INITIAL (initial values), BOUNDARIES, EOS (equation of state), FORCING, EXPLOSIONS, COOLING, EXPERIMENTS • stagger-code/src (SMP production) • Ditto Makefile and subdirs • CACTUS_Stagger_Code • Code becomes a ”thorn” in the CACTUS ”flesh”

  Whole-array version (stagger-code/src-simple):

      SUBROUTINE mhd(eta,Ux,Uy,Uz,Bx,By,Bz,dpxdt,dpydt,dpzdt,dedt,dBxdt,dBydt,dBzdt)
        USE params
        USE stagger
        real, dimension(mx,my,mz) :: &
          eta,Ux,Uy,Uz,Bx,By,Bz,dpxdt,dpydt,dpzdt,dedt,dBxdt,dBydt,dBzdt
      !hpf$ distribute (*,*,block) :: &
      !hpf$   eta,Ux,Uy,Uz,Bx,By,Bz,dpxdt,dpydt,dpzdt,dedt,dBxdt,dBydt,dBzdt
        real, allocatable, dimension(:,:,:) :: &
          Jx,Jy,Jz,Ex,Ey,Ez, &
          Bx_y,Bx_z,By_x,By_z,Bz_x,Bz_y,scr1,scr2
      !hpf$ distribute (*,*,block) :: &
      !hpf$   Jx,Jy,Jz,Ex,Ey,Ez, &
      !hpf$   Bx_y,Bx_z,By_x,By_z,Bz_x,Bz_y,scr1,scr2

        !----------------------------------------------------
        ! Magnetic field's time derivative, dBdt = - curl(E)
        !----------------------------------------------------
        dBxdt = dBxdt + ddzup(Ey) - ddyup(Ez)
        dBydt = dBydt + ddxup(Ez) - ddzup(Ex)
        dBzdt = dBzdt + ddyup(Ex) - ddxup(Ey)

  SMP/OpenMP version (stagger-code/src), same induction equation with explicit loops:

        !----------------------------------------------------
        ! Magnetic field's time derivative, dBdt = - curl(E)
        !----------------------------------------------------
        call ddzup_set(Ey, scr1) ; call ddyup_set(Ez, scr2)
      !$omp parallel do private(iz)
        do iz=1,mz
          dBxdt(:,:,iz) = dBxdt(:,:,iz) + scr1(:,:,iz) - scr2(:,:,iz)
        end do
        call ddxup_set(Ez, scr1) ; call ddzup_set(Ex, scr2)
      !$omp parallel do private(iz)
        do iz=1,mz
          dBydt(:,:,iz) = dBydt(:,:,iz) + scr1(:,:,iz) - scr2(:,:,iz)
        end do
        call ddyup_set(Ex, scr1) ; call ddxup_set(Ey, scr2)
      !$omp parallel do private(iz)
        do iz=1,mz
          dBzdt(:,:,iz) = dBzdt(:,:,iz) + scr1(:,:,iz) - scr2(:,:,iz)
        end do

  CACTUS thorn version (CACTUS_Stagger_Code), declarations:

      SUBROUTINE mhd(CCTK_ARGUMENTS)
        USE hd_params
        USE stagger_params
        USE stagger
        IMPLICIT NONE
        DECLARE_CCTK_ARGUMENTS
        DECLARE_CCTK_PARAMETERS
        DECLARE_CCTK_FUNCTIONS
        CCTK_REAL, allocatable, dimension(:,:,:) :: &
          Jx, Jy, Jz, Ex, Ey, Ez, &
          Bx_y, Bx_z, By_x, By_z, Bz_x, Bz_y

  21. Physics (staggered mesh code) • Equation of state • Qualitative: H+He+Me • Accurate: Lookup table • Opacity • Qualitative: H-minus • Accurate: Lookup table • Radiative energy transfer • Qualitative: Vertical + a few (4) • Accurate: Comprehensive set of rays

  22. Staggered Mesh Code Details • Dynamic memory allocation • Any grid size; no recompilation • Parallelized • Shared memory: OpenMP (and auto-) parallelization • MPI: Direct (Galsgaard) or via CACTUS • Organization – Makefile includes • Experiments • EXPERIMENTS/$(EXPERIMENT).mkf • Selectable features • Eq. of state • Cooling & conduction • Boundaries • OS and compiler dependencies hidden • OS/$(MACHTYPE).f90 • OS/$(HOST).mkf • OS/$(COMPILER).mkf

  23. Radiative Transfer Requirements • Comprehensive • Need at least 20-25 (double) rays • 4-5 frequency bins (recent paper) • At least 5 directions • Speed issue • Would like 25 rays to add negligible time

  24. Benchmark Timing Results

  25. Altix Itanium-2 Scaling

  26. Applications • Star Formation • Planet Formation • Stars • Stellar coronae & chromospheres

  27. Star Formation Nordlund & Padoan 2002

  28. Key feature: intermittency! • What does it mean in this context? • Low density, high velocity gas fills most of the volume! • High density, low velocity features occupy very little space, but carry much of the mass! • How does it influence star formation? • It greatly simplifies understanding it! • Collapsing features are relatively well defined! • Inertial dynamics in most of the volume!

  29. Turbulence Diagnostics of Molecular Clouds Padoan, Boldyrev, Langer & Nordlund, ApJ 2002 (astro-ph/0207568)

  30. Numerical (250³ sim) & Analytical IMF Padoan & Nordlund (astro-ph/0205019)

  31. Low Mass IMF Padoan & Nordlund, ApJ 2004 (astro-ph/0205019)

  32. Planet formation; gas collapse

  33. Coronal Heating • Initial Magnetic Field • Potential extrapolation of AR 9114

  34. Coronal Heating: TRACE 195 Loops

  35. Current sheet hierarchy

  36. Current sheet hierarchy: close-up

  37. Scan through hierarchy: dissipation Note that all features rotate as we scan through – this means that these current sheets are all curved in the 3rd dimension. Hm, the dissipation looks pretty intermittent – large nice empty areas to ignore with an AMR code, right?

  38. This is still the dissipation. Let's replace it with the electric current, as a check! Hm, not quite as empty, but the electric current is at least mostly weak, right? Electric current J

  39. J → log(J) So, let's replace the current with the log of the current, to see the levels of the hierarchy better!

  40. Log of the electric current Not really much to win with AMR here, if we want to cover the hierarchy!

  41. Solar & stellar surface MHD • Faculae • Sunspots • Chromospheres • Coronae

  42. Faculae: Center-to-Limb Variation

  43. Radiative transfer • ’Exact’ radiative energy transfer is not expensive • allows up to ~100 rays per point for 2 x CPU-time • parallelizes well (with MPI or OpenMP) • Reasons for not using Flux Limited Diffusion • Not the right answer (e.g. missing shadows) • Is not cheaper

  44. Radiative Transfer: Significance • Cosmology • End of Dark Ages • Star Formation • Feedback: evaporation of molecular clouds • Dense phases of the collapse • Planet Formation • External illumination of discs • Structure and cooling of discs • Stellar surfaces • Surface cooling: the driver of convection

  45. Radiative transfer methods • Fast local solvers • Feautrier schemes; the fastest (often) • Optimized integral solutions; the simplest (see the sketch below) • A new approach to parallelizing RT • Solve within each domain, with no boundary radiation • Propagate and accumulate solutions globally
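As an illustration of the "optimized integral solution" idea, here is a sketch of the formal solution of the transfer equation dI/dτ = S - I, integrated along the direction of propagation, with the source function taken piecewise linear on each interval. The stratification is a toy one; the production solver (and its Feautrier variant) is more elaborate.

      ! Sketch of an integral (formal) solution along one ray:
      !   I(k) = I(k-1)*exp(-dtau) + integral of S*exp(-(dtau-t)) dt,
      ! with S piecewise linear on each interval.  Toy stratification only.
      program ray_integral_demo
        implicit none
        integer, parameter :: n = 200
        real :: tau(n), s(n), i_ray(n)
        real :: dtau, ex, w0, w1
        integer :: k

        do k = 1, n
          tau(k) = 10.0*real(k-1)/real(n-1)         ! optical depth along the ray
          s(k)   = 1.0 + 0.5*tau(k)                 ! linear source function
        end do

        i_ray(1) = 0.0                              ! no incoming radiation at tau = 0
        do k = 2, n
          dtau = tau(k) - tau(k-1)
          ex   = exp(-dtau)
          w1 = 1.0 - (1.0 - ex)/dtau                ! weight of S(k), piecewise-linear S
          w0 = (1.0 - ex)/dtau - ex                 ! weight of S(k-1)
          i_ray(k) = i_ray(k-1)*ex + w0*s(k-1) + w1*s(k)
        end do

        print *, 'I at end of ray:', i_ray(n), '   S there:', s(n)
      end program ray_integral_demo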

  46. Moments of the radiation field
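The slide itself presumably displayed the standard angular moments of the specific intensity. For reference (textbook definitions, not reproduced from the slide): the intensity depends on seven independent variables, three spatial, two angular, frequency, and time, which is presumably the count lamented on the next slide, and its first few moments are

$$
I_\nu = I(x, y, z, \theta, \phi, \nu, t), \qquad
J_\nu = \frac{1}{4\pi}\oint I_\nu\, d\Omega, \qquad
\mathbf{H}_\nu = \frac{1}{4\pi}\oint I_\nu\, \hat{\mathbf{n}}\, d\Omega, \qquad
K_{ij,\nu} = \frac{1}{4\pi}\oint I_\nu\, n_i n_j\, d\Omega .
$$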

  47. Phew, 7 variables!?! • Give up, adopting some approximation? • Flux Limited Diffusion • Did someone say ”shadows”?? • Or, solve as it stands? • Fast solvers • Parallelize • Did someone say ”difficult”?

  48. Rays Through Each Grid Point • Interpolate source function to rays in each plane

  49. How many rays are needed? • Depends entirely on the geometry • For stellar surfaces, surprisingly few! • 1 vertical + 4 slanted, rotating • 1% accuracy in the mean Q • a few % in fluctuating Q • 8 rays / 48 rays • see plots

  50. 8 rays / 48 rays
