
Multi-Physics with AstroBEAR 2.0

AstroBEAR 2.0 is a multi-physics code developed at the University of Rochester, written primarily in Fortran and parallelized with MPI. It supports a range of physics modules and implements various Riemann solvers, reconstruction methods, and more. Visit http://clover.pas.rochester.edu for more information.

Presentation Transcript


  1. Multi-Physics with AstroBEAR 2.0 Jonathan Carroll-Nellenback University of Rochester

  2. Actively developed at the University of Rochester • Written primarily in Fortran • Parallelized with MPI • Grid-based AMR with arbitrarily sized grids • Conserves mass, momentum, energy, and magnetic flux • Implements various Riemann solvers, reconstruction methods, etc. • Supports thermal conduction, self-gravity, sink particles, resistivity, viscosity, and various cooling functions • Integrated Trac wiki with extensive documentation, a ticketing system, blog posts, etc. • http://clover.pas.rochester.edu

  3. Outline • Parallelization of the AMR code AstroBEAR • Threaded AMR • Scaling results • Examples of multi-physics with AstroBEAR • Interaction between magnetized clumps and radiative shocks • Anisotropic heat conduction • Magnetic towers • Disk formation around binaries • Effect of inhomogeneities within colliding flows

  4. [Diagram: a single level-0 grid update shown as the operation sequence O, A, S (overlap, advance, synchronize).]

  5. [Diagram: a level-0 update (O A S) connected by prolongation (P) and restriction (R) to the updates of two level-1 grids.]

  6. [Diagram: the operation tree grown to more levels and more grids, each coarse O-A-S sequence bracketing the P-R pairs and child-grid updates below it.]

  7. [Diagram: the fully expanded operation tree for a multi-level AMR hierarchy.]

  8. [Diagram: the three-level operation tree partitioned into Level 0, Level 1, and Level 2 threads, all coordinated by a control thread.]

  9. Serial execution: good performance requires load balancing every step. Threaded execution: good performance requires global load balancing across AMR levels. [Diagram contrasting the serial and threaded schedules of the operation tree.]
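To make the threaded schedule above concrete, here is a minimal Python sketch of the idea (AstroBEAR itself is Fortran parallelized with MPI, so this illustrates only the scheduling concept; `advance_level` and its arguments are hypothetical stand-ins):

```python
# One worker thread per AMR level advances that level's grids; a control
# thread coordinates. Illustrative only: the real inter-level prolongation
# and restriction dependencies are omitted.
import threading

def advance_level(level, n_substeps, log):
    """Hypothetical stand-in for one level's sequence of grid updates."""
    for step in range(n_substeps):
        # ...overlap (O), advance (A), sync (S) for each grid would go here...
        log.append((level, step))

log, threads = [], []
for level in range(3):
    # With factor-of-2 refinement, level l takes 2**l substeps per coarse step.
    t = threading.Thread(target=advance_level, args=(level, 2**level, log))
    threads.append(t)
    t.start()

# The control thread joins the level threads; in the real code it would also
# mediate prolongation (P) and restriction (R) between levels.
for t in threads:
    t.join()
print(f"completed {len(log)} grid-level substeps")
```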

  10. Balancing each level separately requires having enough grids on that level to distribute among all of the processors, or else artificially fragmenting grids into smaller pieces.

  11. Global load balancing allows for greater parallelism and lets each processor hold a few larger grids instead of many small ones.
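As an illustration of the global strategy (a sketch, not AstroBEAR's actual balancer): pool the grids from all levels and assign them greedily, largest first, to the least-loaded processor. The grid names and workloads below are invented.

```python
# Greedy "largest job to least-loaded processor" assignment over grids from
# all AMR levels at once, rather than balancing each level separately.
import heapq

def global_balance(grid_workloads, n_procs):
    heap = [(0.0, p) for p in range(n_procs)]          # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for gid, work in sorted(grid_workloads.items(), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(heap)               # least-loaded processor
        assignment[gid] = proc
        heapq.heappush(heap, (load + work, proc))
    return assignment

# A few large coarse grids plus many small fine grids, pooled together.
workloads = {"L0_a": 64.0, "L0_b": 60.0, "L1_a": 18.0, "L1_b": 16.0,
             "L2_a": 5.0, "L2_b": 4.5, "L2_c": 4.0, "L2_d": 3.5}
print(global_balance(workloads, n_procs=4))
```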

  12. Weak Scaling (Efficiency)

  13. Weak Scaling (Speed)

  14. When is threading possible? • For some problems, time derivatives from coarse grids are needed to update fields on finer grids (e.g., the gravitational potential). • If we want to thread the various level advances, there are two options: • Use old time derivatives from the previous coarse step. This should work as long as the derivatives at level boundaries have a “small” second time derivative. • Lag each finer level's advance by one level time step, i.e., advance level 0 from 2t to 3t while advancing level 1 from t to 2t, level 2 from 0.5t to 1.5t, level 3 from 0.25t to 1.25t, and so on (sketched below). This results in postponed restriction.
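The lagged schedule in the second option is easy to tabulate; this little sketch reproduces the windows quoted on the slide (t is the coarse time step):

```python
# Each level l advances over a window of length t that starts at 2t / 2**l,
# lagging its parent level so that all levels can run concurrently.
t = 1.0
for level in range(4):
    start = 2.0 * t / 2**level
    print(f"level {level}: advance from {start:.2f}t to {start + t:.2f}t")
# level 0: 2.00t -> 3.00t, level 1: 1.00t -> 2.00t,
# level 2: 0.50t -> 1.50t, level 3: 0.25t -> 1.25t
```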

  15. Magnetized Clumps Problems involving magnetized clouds and clumps, especially their interaction with shocks, are common in astrophysical environments and have been an active topic of research over the past decade. Using AstroBEAR, we set up initial states with magnetized clumps of differing internal field configurations and drive strong shocks through them. • In the simulations… • The clump density contrast is 100 and the wind Mach number is 10. • The clump is magnetically dominated. • Radiative cooling is strong.
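A minimal sketch of this kind of initial condition in Python/NumPy, assuming arbitrary code units and a hypothetical 2D grid (AstroBEAR problem setups are written in Fortran; this only illustrates the quoted parameters):

```python
# Dense clump (contrast 100) at rest, embedded in a Mach-10 wind.
# The contained poloidal/toroidal fields of the actual runs are omitted here.
import numpy as np

n = 128
x, y = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n), indexing="ij")
r = np.hypot(x, y)

rho_ambient, contrast, r_clump = 1.0, 100.0, 0.5
rho = np.where(r < r_clump, contrast * rho_ambient, rho_ambient)

cs = 1.0                                    # ambient sound speed (code units)
vx = np.where(r < r_clump, 0.0, 10.0 * cs)  # Mach-10 wind outside the clump
print(f"contrast = {rho.max() / rho_ambient:.0f}, wind Mach = {vx.max() / cs:.0f}")
```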

  16. The clump with a contained poloidal field opens up quickly, with a shaft-shaped core. Smaller clumps form behind the shock, and the downstream flow is very turbulent.

  17. The clump with a contained toroidal field collapses onto the axis, making the shocked clump resemble a “nose cone”.

  18. Magnetic Interfaces Interfaces between hot and cold magnetized plasmas exist in various astrophysical contexts. It is of interest to understand how the structure of the magnetic field spanning the interface affects the temporal evolution of the temperature gradient. We explore the relation between the magnetic field topology and the heat transfer rate by adding various fractions of tangled versus ordered field across a hot–cold interface and letting the system evolve to a steady state. • In the simulations… • We set up a sharp hot–cold gradient with a helical magnetic field surrounding the interface. • The rest of the box is filled with magnetic field aligned with the temperature gradient. • The anisotropic heat conduction only conducts heat along field lines. Li, S., Frank, A., & Blackman, E. 2012, ApJ, 748, 24
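The flux law behind the last bullet is q = -κ∥ b (b·∇T), where b is the unit vector along B: only the component of the temperature gradient along the field drives a flux. A small finite-difference sketch in 2D, with an arbitrary κ∥:

```python
# Anisotropic (field-aligned) heat flux: q = -kappa_par * b * (b . grad T).
import numpy as np

def anisotropic_flux(T, Bx, By, kappa_par, dx):
    dTdx, dTdy = np.gradient(T, dx)
    Bmag = np.hypot(Bx, By) + 1e-30      # guard against B = 0
    bx, by = Bx / Bmag, By / Bmag        # unit vector along the field
    grad_par = bx * dTdx + by * dTdy     # (b . grad T)
    return -kappa_par * grad_par * bx, -kappa_par * grad_par * by

# Field along the gradient conducts; a perpendicular field carries no flux.
T = np.tile(np.linspace(1.0, 0.0, 64), (64, 1)).T        # hot-cold along x
qx, _ = anisotropic_flux(T, np.ones((64, 64)), np.zeros((64, 64)), 1.0, 1.0)
print(f"max |q_x| with aligned field: {np.abs(qx).max():.3f}")
```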

  19. Magnetic Interfaces • What we observe… • Initially, heat transport happens only locally around the interface, since it is confined by the small-scale field loops. • As the simulation proceeds, the helical field loops begin to open up and reconnect, serving as channels for heat transport from the hot region to the cold region. • Adding straight field components to the helical field loops speeds up the heat transfer across the interface.

  20. Magnetic Towers Initial conditions • Stellar jets • Density = 100 cm⁻³ • Temperature = 10,000 K • γ = 5/3 • v = 0 km s⁻¹ • Magnetic fields exist only within the central injection region (radius r_inj, in a domain roughly 1200 AU across) • The jet develops solely through magnetic pressure gradients • Continuous Poynting flux injection
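The driving term named on this slide is the gradient of the magnetic pressure P_B = B²/8π. A 1D illustration with a toroidal field confined within r_inj; the particular profile below is an assumption for the sketch, not the one used in the talk:

```python
# Magnetic pressure gradient from a toroidal field confined to r < r_inj.
import numpy as np

r = np.linspace(0.01, 3.0, 300)          # cylindrical radius (code units)
r_inj, B0 = 1.0, 1.0
B_phi = np.where(r < r_inj, B0 * (r / r_inj) * (1.0 - r / r_inj), 0.0)

P_B = B_phi**2 / (8.0 * np.pi)           # magnetic pressure
force = -np.gradient(P_B, r)             # pushes material down the gradient
print(f"peak magnetic pressure at r = {r[np.argmax(P_B)]:.2f} r_inj")
```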

  21. Adiabatic case

  22. Binary-Formed Disks • 3D cubic co-rotating grid with Cartesian coordinates, centered on the center of mass • AMR only around the secondary • Inflow wind-solution boundary conditions (-x, +y, ±z) to simulate the primary's AGB spherical wind (v ~ 10 km/s, mass loss ~ 10⁻⁵ Msun/yr), with M1 = 1.5 Msun • Outflow boundary conditions (+x, -y) • Bound circular orbit • Sink particle to simulate the secondary: an accretor with M2 = 1 Msun • Separations within 10-40 AU • γ = 1.000001 (effectively isothermal)
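For scale, Kepler's third law fixes the orbit from the quoted masses and separations. This sketch (standard cgs constants, nothing AstroBEAR-specific) gives the period and relative orbital speed:

```python
# Orbital period and relative speed for a circular binary with the slide's
# masses (M1 = 1.5 Msun, M2 = 1 Msun) at separations of 10-40 AU.
import math

G = 6.674e-8                              # cgs gravitational constant
Msun, AU, yr = 1.989e33, 1.496e13, 3.156e7
M = (1.5 + 1.0) * Msun

for a_AU in (10, 20, 40):
    a = a_AU * AU
    period = 2.0 * math.pi * math.sqrt(a**3 / (G * M))
    v_rel = 2.0 * math.pi * a / period    # relative orbital speed
    print(f"a = {a_AU:2d} AU: P = {period / yr:5.1f} yr, v = {v_rel / 1e5:4.1f} km/s")
```

At 10-40 AU the orbital speed this gives (roughly 7-15 km/s) is of the same order as the ~10 km/s wind, so neither motion dominates the flow around the accretor.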

  23. Colliding Flows

  24. Colliding Flows

  25. Colliding Flows

  26. Colliding Flows

  27. Energy Budgets

  28. Velocity Spectra

  29. Mass Growth Rates

  30. Core Mass Distribution

