
Optimization_fff as a Data Product



  1. Optimization_fff as a Data Product. J. McTiernan, HMI/AIA Meeting, 16-Feb-2006

  2. Optimization method: Wheatland, Sturrock & Roumeliotis 2000, ApJ, 540, 1150. Objective: minimize the "Objective Function"
  L = ∫_V [ B^-2 |(∇×B)×B|^2 + |∇·B|^2 ] dV,
  which is zero only when B is force-free and divergence-free. We can write dL/dt = -2 ∫_V (dB/dt)·F dV plus a surface term. If we vary B such that dB/dt = F, with dB/dt = 0 on the boundary (so the surface term vanishes), then L will decrease.
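
Below is a minimal numerical sketch of this objective function for a field sampled on a uniform Cartesian grid. It is illustrative only (NumPy, centered differences), not the SSW/IDL implementation; the array layout, the function name, and the small regularizing constant are assumptions.

```python
import numpy as np

def objective_L(B, dx=1.0):
    """Discrete L = sum over the volume of [ B^-2 |(curl B) x B|^2 + |div B|^2 ] dV,
    for B stored as an array of shape (3, nx, ny, nz) on a uniform grid."""
    Bx, By, Bz = B
    # centered-difference gradients: np.gradient(A, dx) -> [dA/dx, dA/dy, dA/dz]
    dBx, dBy, dBz = np.gradient(Bx, dx), np.gradient(By, dx), np.gradient(Bz, dx)
    # current density J = curl B
    Jx = dBz[1] - dBy[2]
    Jy = dBx[2] - dBz[0]
    Jz = dBy[0] - dBx[1]
    # Lorentz-force term (curl B) x B, and div B
    Fx, Fy, Fz = Jy * Bz - Jz * By, Jz * Bx - Jx * Bz, Jx * By - Jy * Bx
    divB = dBx[0] + dBy[1] + dBz[2]
    B2 = Bx**2 + By**2 + Bz**2 + 1e-30          # small constant avoids 0/0
    return float((((Fx**2 + Fy**2 + Fz**2) / B2 + divB**2) * dx**3).sum())
```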

  3. Optimization method (cont):
  • Start with a box. The bottom boundary is the magnetogram; the upper and side boundaries come from the initial field. Typically start with a potential field or linear FFF extrapolated from the magnetogram.
  • Calculate F and set new B = B + F*dt (typical dt = 1.0e-5). B is fixed on all boundaries.
  • The "objective function" L is guaranteed to decrease, but the change in L (ΔL) becomes smaller as iterations continue.
  • Iterate until ΔL approaches 0 (see the sketch below).
  • The final extrapolation depends on all boundary conditions and therefore on the initial conditions.
  • Requires a vector magnetogram with the 180-degree ambiguity resolved.
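
A schematic of the relaxation loop just described, again only a sketch: `force_F` and `objective_L` stand in for the method's F and L (the real F is the full expression from Wheatland, Sturrock & Roumeliotis 2000), and the array layout (3, nx, ny, nz) is assumed.

```python
def relax(B, force_F, objective_L, dt=1e-5, tol=1e-8, max_iter=100000):
    """Step the interior field along F until the change in L is negligible;
    B stays fixed on all six boundary faces, as stated on the slide above."""
    L_old = objective_L(B)
    for it in range(max_iter):
        F = force_F(B)
        B[:, 1:-1, 1:-1, 1:-1] += dt * F[:, 1:-1, 1:-1, 1:-1]   # new B = B + F*dt
        L_new = objective_L(B)
        if abs(L_old - L_new) <= tol * L_old:   # delta-L has effectively reached 0
            return B, L_new, it
        L_old = L_new
    return B, L_old, max_iter
```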

  4. Optimization method IDL code:
  • Online as an SSW package; see http://www.lmsal.com/solarsoft/.
  • Slow: a test case with a 64x64x64 cube took 157 minutes (3.2 GHz Linux processor, 4 Gbytes RAM). (Currently I am committed to writing a Fortran version, which should pick up a factor of 5 to 10? T. Wiegelmann's code is faster.)
  • Users can specify 2-D or 3-D input, and all spatial grids.
  • Spatial grids can be variable (see the grid sketch below).
  • Code can use spherical coordinates. (But /spherical has no automatic initialization, so the user needs to specify all boundaries; /spherical is also relatively untested.)
  • Uncertainties? Some tests exist (see S. Solanki's talk from this meeting and J. McTiernan's SHINE 2005 poster). These will have to be characterized to get an uncertainty as a function of height and of field strength in the magnetogram.
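
The "variable grid" option usually means a z grid whose spacing grows with height. A minimal sketch of one way to build such a grid follows; the geometric stretching law and the function name are illustrative, not the package's actual inputs.

```python
import numpy as np

def stretched_z_grid(nz, dz0=1.0, ratio=1.05):
    """Geometrically stretched z grid: cell size grows by `ratio` per step,
    so a 41-point grid can reach heights that would need many more uniform cells."""
    dz = dz0 * ratio ** np.arange(nz - 1)
    return np.concatenate(([0.0], np.cumsum(dz)))

print(stretched_z_grid(41)[-1])   # total height spanned by 41 stretched points (~121 dz0)
```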

  5. Speed test #1:
  64x64x64 cube, Low-Lou solution, potential extrapolation for the initial field and the side and upper boundaries: total time = 157 min, per iteration = 3.06 sec.
  32x32x32 cube, same Low-Lou solution and setup: total time = 10 min, per iteration = 0.32 sec.
  So, per iteration, time scales as N^3 (or N_T, the total number of grid points); total time scales as N^4 (or N_T^(4/3)).
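
A quick arithmetic check of the quoted numbers against these scalings, assuming N_T is simply the total number of grid points:

```python
nt_64, nt_32 = 64**3, 32**3
print(3.06 / 0.32, nt_64 / nt_32)               # ~9.6 vs 8: per-iteration time ~ N_T
print(157 / 10, (nt_64 / nt_32) ** (4 / 3))     # ~15.7 vs 16: total time ~ N_T^(4/3)
```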

  6. Speed test #2:
  115x99x99 cube, AR 9026 IVM data, potential extrapolation for the initial field and the side and upper boundaries: total time = 67 min, per iteration = 6.43 sec.
  231x198x41 cube, same AR 9026 IVM data and setup, variable z grid: total time = 95 min, per iteration = 10.5 sec.
  Per iteration, time still scales as N_T; total time scales as less than N_T? Fewer iterations, and a larger final L value, for the variable-grid case...

  7. Memory:
  Memory usage is not a problem in comparison to speed. Memory usage scales with N_T. Typically, seven 3-D arrays are held in memory: the 3 components of B, the 3 components of J, and div B (7 * 4 bytes * N_T).
  For the 115x99x99 cube, this is 31 Mbytes; for a 512x512x32 cube, about 235 Mbytes; for a 512x512x512 cube, 3.76 Gbytes.
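
The estimate above (7 arrays * 4 bytes * N_T) in a couple of lines, purely to reproduce the slide's arithmetic:

```python
def memory_mb(nx, ny, nz, n_arrays=7, bytes_per_val=4):
    """Rough memory estimate: seven single-precision 3-D arrays
    (Bx, By, Bz, Jx, Jy, Jz, div B) of nx*ny*nz points each, in Mbytes."""
    return n_arrays * bytes_per_val * nx * ny * nz / 1e6

print(memory_mb(115, 99, 99))     # ~31.6 Mbytes
print(memory_mb(512, 512, 32))    # ~235 Mbytes
print(memory_mb(512, 512, 512))   # ~3760 Mbytes (3.76 Gbytes)
```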

  8. In the pipeline?
  Say you want to provide extrapolations as a data product, that the code is a factor of 10 faster than this one (so that it takes 6.7 min to do the 115x99x99 cube), and that processing time scales as N_T^(4/3). Say there are 5 ARs, and we have 512x512 boxes containing each one. A 512x512x32 extrapolation then takes approximately 97 min. Is 1 extrapolation per AR per day an option?
  If you want a large-scale extrapolation, covering as large a chunk of the Sun as you have ambiguity resolution for, but with reduced spatial resolution (5 arcsec?), maybe 200x200x200 scaled so that the solution reaches 2.5 solar radii: this would take about 91 minutes. Maybe 1 of these per day.
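
These run times follow from scaling the assumed 6.7-minute 115x99x99 case by (N_T / N_T,ref)^(4/3); a quick check of the arithmetic:

```python
def scaled_minutes(nx, ny, nz, t_ref_min=6.7, n_ref=115 * 99 * 99):
    """Extrapolation time assuming total cost ~ N_T^(4/3), scaled from the
    assumed 6.7-minute 115x99x99 reference case."""
    return t_ref_min * (nx * ny * nz / n_ref) ** (4 / 3)

print(scaled_minutes(512, 512, 32))    # ~97 min for one 512x512x32 AR box
print(scaled_minutes(200, 200, 200))   # ~91 min for the 200x200x200 large-scale box
```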

  9. In the pipeline?
  Now we have 6 extrapolations. Depending on how many CPUs are available to be dedicated to this, it will take about 90 minutes to 9 hours to process 1 day of data. Nine hours is too long, so for this plan to work, 2 or 3 CPUs are needed. If the code can be parallelized, maybe this can be made to run faster.
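
The same numbers as a back-of-the-envelope throughput check; the run count and per-run time are taken from the previous slide, and the CPU counts are just examples:

```python
runs_per_day = 6          # 5 active-region boxes + 1 large-scale box
minutes_per_run = 95      # rough average of the ~91-97 min estimates above
for n_cpu in (1, 2, 3, 6):
    hours = runs_per_day * minutes_per_run / n_cpu / 60
    print(f"{n_cpu} CPU(s): {hours:.1f} hours per day of data")
# 1 CPU: ~9.5 h; 2-3 CPUs: ~3-5 h; 6 CPUs: ~1.6 h
```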

  10. Conclusions?
  We would be happy if we could get 1 extrapolation per day per AR plus 1 larger-scale extrapolation. Or maybe a few large-scale extrapolations per day, and forget about individual ARs. Tools for extrapolations should be provided to users.

  11. Fig 1: Photospheric magnetogram, AR 10486, 29-OCT-2003 1712 UT (from K.D. Leka), and chromospheric magnetogram, AR 10486, 29-OCT-2003 1846 UT (from T.R. Metcalf). Panels: Bx, By, Bz for each.

  12. Fig 2: Field extrapolations via the optimization method (Wheatland, Sturrock & Roumeliotis 2000). Top: photospheric. Bottom: chromospheric.

  13. Fig 5: Average fractional uncertainty in the extrapolated field, assuming uncertainties of 100/50 G in chromospheric Bxy/Bz and 50/25 G in photospheric Bxy/Bz, from a Monte Carlo calculation.
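
A schematic of the kind of Monte Carlo estimate described in the caption: add Gaussian noise at the quoted levels to the boundary magnetogram, re-run the extrapolation, and measure the fractional spread of the result versus height. The `extrapolate` callable, the noise model, and the trial count are all placeholders, not the actual procedure used for the figure.

```python
import numpy as np

def mc_fractional_uncertainty(bx, by, bz, extrapolate,
                              sigma_xy=50.0, sigma_z=25.0, n_trials=20):
    """Perturb the boundary field with Gaussian noise (photospheric levels shown),
    re-extrapolate, and return the mean fractional spread of |B| at each height."""
    rng = np.random.default_rng(0)
    ref = extrapolate(bx, by, bz)                      # reference cube, shape (3, nx, ny, nz)
    trials = []
    for _ in range(n_trials):
        cube = extrapolate(bx + rng.normal(0, sigma_xy, bx.shape),
                           by + rng.normal(0, sigma_xy, by.shape),
                           bz + rng.normal(0, sigma_z, bz.shape))
        trials.append(np.sqrt((cube**2).sum(axis=0)))  # |B| for this noise realization
    spread = np.std(trials, axis=0)
    ref_mag = np.sqrt((ref**2).sum(axis=0)) + 1e-30
    return (spread / ref_mag).mean(axis=(0, 1))        # fractional uncertainty vs height
```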
