Presentation Transcript


  1. The Use of Vector Magnetogram Data in MHD Models of the Solar Atmosphere and Prospects for an Assimilative Model George H. Fisher Space Sciences Laboratory University of California, Berkeley

  2. What would an "Assimilative Model" of the solar atmosphere consist of? A time-evolving physical model of the Sun's atmosphere, or a portion of the Sun's atmosphere, which can be corrected by time-dependent measurements that can be related in some manner to properties of the solar atmosphere. In particular, this means a 3D MHD model of the Sun's atmosphere, from photosphere to corona, that is updated by means of vector magnetograms.

  3. What are the most important elements of a physics-based model of the Sun?
  • Nearly all transient phenomena, such as solar-initiated "space weather" events, are driven by, or strongly affected by, magnetic fields.
  • A fluid treatment (MHD) is reasonable most of the time (except, probably, during solar flares).
  • Magnetic fields thread all layers of the Sun's convection zone and atmosphere.
  • Maps of the estimated solar magnetic field (line-of-sight component) can be made regularly in the photosphere.
  • In the near future, maps of all 3 components of the estimated magnetic field (vector magnetograms) will be taken regularly.
  • Vector magnetograms are essential for determining the free energy available in the solar atmosphere to drive violent phenomena. Without vector magnetograms, solar models are not meaningfully constrained.

  4. Schematic diagram of an assimilative model employing the Kalman filter approach (a minimal numerical sketch follows below):
  • A is the physical-model time-advance operator
  • H is the operator relating the state variable x to the observable z
  • K is the Kalman gain
  • Q is the estimated process (model) error covariance
  • R is the measurement error covariance
  • P is the estimate of the state-variable error covariance
  Diagram taken from Welch and Bishop (2006), "An Introduction to the Kalman Filter".
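To make the slide's notation concrete, here is a minimal sketch of the Kalman predict/correct cycle in the Welch & Bishop notation above; the linear toy dynamics, dimensions, and numerical values are illustrative assumptions, not part of any solar model.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/correct cycle in the notation of Welch & Bishop (2006).

    x : prior state estimate        P : state-error covariance
    z : new measurement             A : model time-advance operator (linear here)
    H : maps state to observables   Q : process/model error covariance
    R : measurement error covariance
    """
    # Predict: advance the state and its error covariance with the model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Correct: compute the Kalman gain K and blend model with measurement
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)       # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # updated error covariance
    return x_new, P_new

# Toy usage: 2-component state, single observable (illustrative placeholder only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([0.3]), A=A, H=H, Q=Q, R=R)
```

In the solar application sketched on this slide, A would be the MHD time-advance operator and H the mapping from the model state to the vector-magnetogram observables.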

  5. Needed ingredients for an assimilative (e.g. Kalman filter) model of the solar atmosphere:
  • A reasonably good physical model
  • Measurements with a good enough time cadence and accuracy to be useful
  • A well-understood connection between physical and measured variables
  • A good understanding of the data and model errors
  Where do we stand with respect to these requirements?

  6. 1. What are the minimum requirements for a "reasonably good" physical model?
  • The model must accommodate the range of conditions from the photosphere, where magnetic fields can be routinely measured, into the corona, where "space weather" events occur.
  • The model must include the dominant terms in the energy equation that apply in the photosphere-corona system. The dominant terms are drastically different in the different parts of the domain.
  • The model must be able to accommodate the wide range of spatial and temporal scales from the photosphere to the corona.
  • The model must be able to accommodate vector magnetic field maps as a time-dependent boundary condition. This is required whether or not the model is truly assimilative!
  Until recently, no existing models satisfied these requirements. Here is a brief summary of the challenges:

  7. Numerical challenges: A dynamic numerical model extending from below the photosphere out into the corona must:
  • span a ~10–15 order-of-magnitude change in gas density and a thermodynamic transition from the 1 MK corona to the optically thick, cooler layers of the low atmosphere, visible surface, and below;
  • resolve a ~100 km photospheric pressure scale height while simultaneously following large-scale evolution (we use the Mikic et al. 2005 technique to mitigate the need to resolve the ~1 km transition-region scale height characteristic of a Spitzer-type conductivity);
  • remain highly accurate in the turbulent sub-surface layers, while still employing an effective shock-capture scheme to follow and resolve shock fronts in the upper atmosphere;
  • address the extreme temporal disparity of the combined system.

  8. The Solar Photosphere
  The solar photosphere is an extremely thin, corrugated, and complex layer, in which the plasma β in strong-field regions is of order unity. This is the layer in which magnetic fields can be measured most routinely.
  Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway).

  9. The solar corona:
  • The corona is a low-density, low-β, optically thin, hot plasma.
  • Plasma entrained within coronal loops evolves rapidly compared to sub-surface structures.
  • The magnetically dominated corona can store energy over long periods of time, but will often undergo sudden, rapid, and dramatic topological changes as magnetic energy is released.
  • The size scale of coronal structures is generally much larger than the depth of the photosphere.
  Movies courtesy of LMSAL, TRACE & LASCO consortia.

  10. RADMHD (Abbett, 2007, ApJ, in press): Numerical techniques
  • We use a semi-implicit, operator-split method.
  • Explicit sub-step: We use a 3D extension of the semi-discrete method of Kurganov & Levy (2000) with the third-order-accurate central weighted essentially non-oscillatory (CWENO) polynomial reconstruction of Levy et al. (2000).
  • CWENO interpolation provides an efficient, accurate, simple shock-capture scheme that allows us to resolve shocks in the transition region and corona without refining the mesh. The solenoidal constraint on B is enforced implicitly.

  11. RADMHD: Numerical techniques
  • We use a semi-implicit, operator-split method (a schematic sketch of such an update follows below).
  • Implicit sub-step: We use a "Jacobian-free" Newton-Krylov (JFNK) solver (see Knoll & Keyes 2003). The Krylov sub-step employs the generalized minimum residual (GMRES) technique.
  • JFNK provides a memory-efficient means of implicitly solving a non-linear system, and frees us from the restrictive CFL stability conditions imposed by, e.g., the electron thermal conductivity and radiative cooling.
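To make the structure of such a scheme concrete, here is a minimal, illustrative sketch of a semi-implicit, operator-split update: a placeholder explicit advection sub-step followed by a backward-Euler sub-step for a stiff diffusion-like term, solved matrix-free with Newton-Krylov/GMRES via SciPy. The 1-D grid, the model operators, and all coefficients are assumptions for illustration; this is not the RADMHD discretization.

```python
import numpy as np
from scipy.optimize import newton_krylov

N, dx, dt = 64, 1.0, 0.5
kappa = 5.0  # stiff "conductivity" coefficient (illustrative value)

def explicit_substep(u, dt):
    # Placeholder for the explicit (CWENO-type) hyperbolic sub-step:
    # here just simple upwind advection with unit speed on a periodic grid.
    return u - dt / dx * (u - np.roll(u, 1))

def implicit_substep(u_star, dt):
    # Backward-Euler step for a stiff diffusion term,
    # u^{n+1} - dt * kappa * lap(u^{n+1}) = u*,
    # solved Jacobian-free: newton_krylov only needs the residual F(u).
    def residual(u):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        return u - dt * kappa * lap - u_star
    return newton_krylov(residual, u_star, method='gmres')

u = np.exp(-0.5 * ((np.arange(N) - N / 2) / 4.0) ** 2)  # initial Gaussian pulse
for _ in range(10):                                      # operator-split time loop
    u = implicit_substep(explicit_substep(u, dt), dt)
```

The point of the split is visible in the time step: dt is restricted only by the explicit advection CFL condition, while the stiff term is advanced implicitly without ever forming a Jacobian matrix.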

  12. Characteristics of the Quiet Sun model atmosphere. (Note: the movie shown on this slide is not a time series.)

  13. Velocity determination at the photosphere for magnetogram-driven models
  We are currently exploring driving the RADMHD code directly with sequences of vector magnetograms, without using any assimilative techniques. To do this, it is necessary to find a velocity field at the photospheric boundary that is consistent with the vertical component of the magnetic induction equation. We have spent a lot of effort deriving and evaluating techniques to do this, our leading techniques being MEF (Longcope et al.) and ILCT (Welsch et al.). Beyond their importance for driving the code, these techniques are useful in their own right for deriving Poynting and helicity fluxes directly from magnetogram observations.

  14. An example of magnetic evolution in an active region • NOAA AR 8210, May 1, 1998 – 1 day of evolution seen by MDI

  15. Local Correlation Tracking
  • Central idea of the LCT schema: find the proper motions of features in a pair of successive images by maximizing a cross-correlation function (or minimizing an error function) between sub-regions of the images.
  • The concept is generally attributed to November & Simon (1988).
  • Useful with G-band filtergrams, Hα images, or magnetograms.
  • The FLCT method (which we developed) is similar (see the sketch following this list). For each pixel, we:
  • mask each image with a Gaussian, of width σ, centered at that pixel;
  • crop the resulting images, keeping only significant regions;
  • compute the cross-correlation function between the two cropped images, using standard Fast Fourier Transform (FFT) techniques;
  • use a 2nd-order Taylor expansion to find the shifts in x and y that maximize the cross-correlation function to sub-arc-second precision; and
  • use the shifts in x and y, and the time Δt between images, to find the intensity features' apparent motion along the solar surface.
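Below is a minimal, illustrative Python sketch of the per-pixel FLCT steps just listed (Gaussian windowing, FFT cross-correlation, and a quadratic sub-pixel peak fit). The window size, boundary handling, and peak-refinement details are simplifying assumptions and not the published FLCT code.

```python
import numpy as np

def flct_shift(img1, img2, x0, y0, sigma):
    """Estimate the (dx, dy) feature shift near pixel (x0, y0) between two
    images, following the FLCT steps listed above (illustrative sketch only)."""
    ny, nx = img1.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # 1. Mask both images with a Gaussian of width sigma centered on the pixel
    w = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    f1, f2 = img1 * w, img2 * w
    # 2. "Crop" by keeping a box of half-width 4*sigma around the pixel
    h = int(4 * sigma)
    sl = (slice(max(y0 - h, 0), y0 + h), slice(max(x0 - h, 0), x0 + h))
    f1, f2 = f1[sl], f2[sl]
    # 3. Cross-correlation of the two cropped images via FFTs
    cc = np.fft.ifft2(np.conj(np.fft.fft2(f1)) * np.fft.fft2(f2)).real
    cc = np.fft.fftshift(cc)
    # 4. Locate the correlation peak, then refine it with a 2nd-order
    #    (quadratic) expansion about the peak to get sub-pixel shifts
    jy, jx = np.unravel_index(np.argmax(cc), cc.shape)
    jy = int(np.clip(jy, 1, cc.shape[0] - 2))
    jx = int(np.clip(jx, 1, cc.shape[1] - 2))
    def parabolic(cm, c0, cp):
        return 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)
    dx = jx - f1.shape[1] // 2 + parabolic(cc[jy, jx - 1], cc[jy, jx], cc[jy, jx + 1])
    dy = jy - f1.shape[0] // 2 + parabolic(cc[jy - 1, jx], cc[jy, jx], cc[jy + 1, jx])
    return dx, dy

# 5. Apparent velocity follows from the shift and the time between images:
# vx, vy = dx * pixel_size / dt, dy * pixel_size / dt
```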

  16. Example of FLCT applied to NOAA AR 8210 (May 1 1998)

  17. The Demoulin & Berger (2003) Interpretation of LCT
  The apparent horizontal motion, $\mathbf{U}_{LCT}$, arises from a combination of horizontal motions and of vertical motions acting on non-vertical fields: $\mathbf{U}_{LCT} = \mathbf{v}_h - (v_z/B_z)\,\mathbf{B}_h$.

  18. The Ideal MHD Induction Equation
  • How can we ensure that LCT-determined velocities are physically consistent with the magnetic induction equation?
  • Only the z-component of the induction equation contains no unobservable vertical derivatives: $\partial B_z/\partial t = -\nabla_h \cdot (\mathbf{v}_h B_z - v_z \mathbf{B}_h)$.
  • Now substitute in the Demoulin & Berger hypothesis, $\mathbf{U}_{LCT} B_z = \mathbf{v}_h B_z - v_z \mathbf{B}_h$. The ideal MHD induction equation then simplifies to this form: $\partial B_z/\partial t + \nabla_h \cdot (\mathbf{U}_{LCT} B_z) = 0$.

  19. I+LCT (ILCT): Use LCT to constrain solutions of the induction equation
  • Let $\mathbf{U} B_z \equiv \nabla_h \phi + \nabla_h \times (\psi \hat{\mathbf{z}})$.
  • Solve for $\phi$ and $\psi$ using the 2D divergence and the 2D curl (z-component) of this relation, with the approximation that $\mathbf{U} = \mathbf{U}_{LCT}$: $\nabla_h^2 \phi = -\partial B_z/\partial t$ and $\nabla_h^2 \psi = -\hat{\mathbf{z}} \cdot \nabla_h \times (\mathbf{U}_{LCT} B_z)$. (A numerical sketch of these solves follows below.)
  • Note that if only $B_z$ (or an approximation to it, $B_{LOS}$) is known, we can still solve for $\phi$ and $\psi$!
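A minimal sketch of the Poisson solves implied by the decomposition above, assuming a periodic Cartesian patch and FFT-based inversion; the function names, sign conventions, and boundary treatment follow the reconstruction on this slide and are illustrative assumptions rather than the ILCT code of Welsch et al.

```python
import numpy as np

def poisson_fft(rhs, dx):
    """Solve the 2-D Poisson equation lap(f) = rhs on a periodic grid via FFTs."""
    ny, nx = rhs.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = 1.0                      # avoid division by zero; mean of f is arbitrary
    fhat = np.fft.fft2(rhs) / (-k2)
    fhat[0, 0] = 0.0
    return np.fft.ifft2(fhat).real

def ilct_potentials(dBz_dt, ux_lct, uy_lct, Bz, dx):
    """Illustrative ILCT-style step: U Bz = grad(phi) + curl(psi zhat).
    The divergence of U Bz is fixed by dBz/dt; the curl is estimated from the
    LCT flow (the approximation U ~ U_LCT). Arrays are indexed [y, x]."""
    # lap(phi) = div(U Bz) = -dBz/dt
    phi = poisson_fft(-dBz_dt, dx)
    # lap(psi) = -zhat . curl(U_LCT Bz)
    fx, fy = ux_lct * Bz, uy_lct * Bz
    curl_z = np.gradient(fy, dx, axis=1) - np.gradient(fx, dx, axis=0)
    psi = poisson_fft(-curl_z, dx)
    return phi, psi
```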

  20. Apply ILCT to IVM vector magnetogram data for AR 8210
  • Vector magnetic field data enable us to find the 3-D flow field from ILCT via the equations shown on the previous slide. Transverse flows are shown as arrows; up/down flows are shown as blue/red contours.

  21. 2. Measurements of the magnetic field at the photosphere Slide courtesy of Tom Metcalf, CoRA/NWRA

  22. How is the vector magnetic field determined?
  • Magnetic fields split spectral lines via the Zeeman effect, but using the splitting itself is not useful in most cases. (Figure: spot observed in 5250 Å, a normal Zeeman triplet.)
  Slide courtesy of Tom Metcalf, CoRA/NWRA

  23. Zeeman Effect: Normal Zeeman Triplet
  • The π component is unshifted in wavelength; the σ components are shifted to either side of the π component.
  • If the magnetic field is directed along the line of sight, the σ components are left and right circularly polarized and the π component is unpolarized.
  • If the magnetic field is directed perpendicular to the line of sight, the σ and π components have mutually orthogonal linear polarizations.
  Slide courtesy of Tom Metcalf, CoRA/NWRA

  24. How is Polarization Measured?
  • Polarization is measured as the difference between data obtained using two different polarizers (the standard definitions are sketched below).
  • For example, a Wollaston prism or a calcite beam splitter produces two output beams of orthogonal linear polarization: I+Q, I-Q.
  • U and V follow in the same way with a retarder in the path.
  Slide courtesy of Tom Metcalf, CoRA/NWRA
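For reference, a short sketch of the standard Stokes-parameter definitions that underlie this measurement scheme (standard notation assumed; not taken verbatim from the slide): with intensities measured through linear polarizers at 0°, 45°, 90°, and 135°, and through right/left circular analyzers,

```latex
\begin{aligned}
I &= I_{0^\circ} + I_{90^\circ} = I_{45^\circ} + I_{135^\circ} = I_{\mathrm{RCP}} + I_{\mathrm{LCP}},\\
Q &= I_{0^\circ} - I_{90^\circ}, \qquad
U  = I_{45^\circ} - I_{135^\circ}, \qquad
V  = I_{\mathrm{RCP}} - I_{\mathrm{LCP}}.
\end{aligned}
```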

  25. The Stokes Profiles
  • A magnetograph observes the Stokes profiles.
  • V/I is circular polarization and gives the LOS field.
  • U/I and Q/I are linear polarization and give the transverse field.
  Slide courtesy of Tom Metcalf, CoRA/NWRA

  26. Observed Stokes Profiles
  [Figure: Stokes I, Q, U, and V profiles vs. relative wavelength (nm)]
  • Na-D line observations from the IVM.
  • They look more or less as expected, with a few differences: noise is clearly present, and the prefilter distorts the spectrum.
  Slide courtesy of Tom Metcalf, CoRA/NWRA

  27. 3. Relationship between observed and measured variables - inverting the polarization observations
  • With the polarization in hand, how do we compute the magnetic field?
  • There are a number of methods: direct measurement of line splitting; fitting Stokes profiles; weak-field approximations (an example is sketched below); calibration constant(s).
  • Different methods actually measure different quantities: the magnetic field or the flux density. Beware the difference!
  Slide courtesy of Tom Metcalf, CoRA/NWRA
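As one concrete illustration of the "weak field approximation" entry above (a standard magnetograph relation, not content from the slide): in the weak-field limit the circular polarization traces the line-of-sight field, while the linear polarization responds only at second order to the transverse field,

```latex
V(\lambda) \;\simeq\; -\,\Delta\lambda_B \,\cos\gamma\, \frac{\partial I}{\partial\lambda},
\qquad
Q,\,U \;=\; \mathcal{O}\!\left(\Delta\lambda_B^{2}\right),
```

where $\Delta\lambda_B$ is the Zeeman splitting (quantified after the next slide) and $\gamma$ is the field inclination to the line of sight. This is why $V/I$ yields the LOS field while $Q/I$ and $U/I$ constrain the transverse field.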

  28. Inverting the Polarization Observations
  • The best method is to observe the Zeeman splitting directly.
  • This is not generally possible for optical observations, since the fields on the Sun are too weak; the Zeeman splitting goes as $\lambda^2 B$, so this works better in the IR (see the sketch after this list).
  • It gives the magnetic field directly, without worrying about the filling factor.
  • The next best method is to fit the Stokes profiles to the Unno profiles (Milne-Eddington atmosphere: source function linear with optical depth).
  • This gives the magnetic field, filling factor, and thermodynamic parameters.
  Slide courtesy of Tom Metcalf, CoRA/NWRA
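A hedged sketch of the scaling invoked above, using the standard classical Zeeman-splitting formula (not reproduced from the slide):

```latex
\Delta\lambda_B \;=\; \frac{e\, g\, \lambda^{2} B}{4\pi m_e c^{2}}
\;\approx\; 4.67\times10^{-13}\, g\, \lambda^{2}\, B
\quad [\lambda,\ \Delta\lambda_B \text{ in \AA};\ B \text{ in G}],
\qquad
\frac{\Delta\lambda_B}{\Delta\lambda_D} \;\propto\; \lambda,
```

since the thermal Doppler width $\Delta\lambda_D \propto \lambda$. The splitting relative to the line width therefore grows linearly with wavelength, which is why infrared lines are favored for direct splitting measurements.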

  29. The 180 Degree Ambiguity
  • The observed transverse field is ambiguous by 180 degrees.
  • There are a number of ways to fix this, but as a practical matter it is most difficult for the most interesting regions and easy for uninteresting regions.
  • Acute angle solution: fast, but will fail in complex active regions.
  • Minimum energy solution: very slow, but more robust.
  • Correspondence with H-alpha fibrils: generally accurate, but difficult to automate.
  Slide courtesy of Tom Metcalf, CoRA/NWRA

  30. 4. Known sources of error in vector magnetograms:
  • Photon statistics: polarization is computed as a difference of two signals.
  • Polarization cross talk: polarization signal "leaks" between Stokes parameters; this is corrected on an instrument-by-instrument basis.
  • Calibration constant: the calibration constant in magnetographs is very approximate and is not constant at all.
  • Atmospheric seeing: will induce spurious polarization, sometimes strong.
  • Polarization bias: should be correctable in most instruments by looking at the continuum or at regions of very weak field.
  • Bad 180-degree ambiguity resolution (how to quantify?).
  Bottom line: vector magnetogram errors can be characterized, at least statistically.
  Slide courtesy of Tom Metcalf, CoRA/NWRA

  31. Summary of assimilative model requirements
  • Reasonably good physical model – good progress!
  • Measurements with good time cadence, accuracy – rapidly improving!
  • Well-understood connection between physical variables and measurements – reasonably good.
  • Understanding of data errors – reasonably good; understanding of model errors – unknown.

  32. Issues that must be resolved for an assimilative solar MHD model
  • Currently, the data are used directly to determine the flow field at the photosphere. How can this be made consistent with the Kalman "corrector" step, since the data have already been used?
  • Can the Kalman filter approach be used in a "sub-step" process to determine the photospheric velocity field, instead of using the ILCT or MEF procedures?
  • Noise in the vector magnetogram data will probably introduce spurious Alfvén waves into the model, even with the "filtering". How do we cope with this?
  • How do we estimate "model" errors?

  33. Conclusions
  • Difference between assimilative models and models directly driven by data (such as those we are now running): assimilative models have the potential to accommodate data errors more consistently.
  • Assimilative techniques are worth detailed investigation for solar MHD models.
  • There are other, simpler solar models that may be more immediately amenable to assimilation techniques.
