Workflow automation for processing plasma fusion simulation data

Norbert Podhorszki, Bertram Ludäscher (University of California, Davis) and Scott A. Klasky (Scientific Computing Group, Oak Ridge National Laboratory). GPSC. Center for Plasma Edge Simulation.


Presentation Transcript


  1. Workflow automation for processing plasma fusion simulation data. Norbert Podhorszki, Bertram Ludäscher, University of California, Davis; Scott A. Klasky, Scientific Computing Group, Oak Ridge National Laboratory. GPSC

  2. Center for Plasma Edge Simulation
  • Focus on the edge of the plasma in the tokamak
  • Multi-scale, multi-physics simulation
  [Figures: edge turbulence in NSTX (at 100,000 frames/s); diverted magnetic field]

  3. Images plasma physicists adore
  [Figures: electric potential; parallel flow and particle positions]

  4. Monitoring the simulation means…

  5. Multi-physics → many codes

  6. XGC simulation output
  • Desired size of simulation (to be run on the petascale machine):
    - 100K time steps
    - 100 billion particles
    - 10 attributes (double precision) per particle = 8 TB of data per time step
  • Save (and process) 1K-10K time steps
  • About a 5-day run on the petascale machine
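
  The 8 TB figure follows directly from the numbers above, with one double-precision attribute taking 8 bytes:

  \[
  10^{11}\ \text{particles} \times 10\ \tfrac{\text{attributes}}{\text{particle}} \times 8\ \tfrac{\text{bytes}}{\text{attribute}} = 8 \times 10^{12}\ \text{bytes} = 8\ \text{TB per time step},
  \]

  so saving 1K-10K of the time steps amounts to roughly 8-80 PB over the run.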

  7. XGC simulation output
  • Proprietary binary files (BP): 3D variables, a separate file per timestep
  • NetCDF files: 2D variables, all timesteps in one file
  • M3D coupling data:
    - to compute a new equilibrium with an external code (loose coupling)
    - to check the linear stability of XGC externally

  8. What to do with these outputs?
  • Proprietary binary files (BP):
    - Transfer to the end-to-end system using bbcp
    - Convert to HDF5 format (with a C program)
    - Generate images using AVS/Express (running as a service)
    - Archive HDF5 files in large chunks to HPSS
  • NetCDF files:
    - Transfer to the end-to-end system (updating as new timesteps are written into the files)
    - Generate images using the grace library
    - Archive NetCDF files at the end of the simulation
  • M3D coupling data:
    - Transfer to the end-to-end system
    - Execute M3D: compute the new equilibrium
    - Transfer the new equilibrium back to XGC
    - Execute ELITE: compute the growth rate, test linear stability
    - Execute M3D-MPP: study unstable states (ELM crash)
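
  For concreteness, here is a minimal Java sketch of how one transfer step could invoke bbcp. The host, destination path, and the use of ProcessBuilder are assumptions for illustration; the actual workflow drives bbcp through Kepler actors over SSH.

  ```java
  import java.io.IOException;

  // Minimal sketch (not the actual Kepler actor): run bbcp to push one BP
  // timestep file to the end-to-end system. Host and destination path are
  // hypothetical placeholders.
  public class BbcpTransfer {
      public static int transfer(String localFile)
              throws IOException, InterruptedException {
          Process p = new ProcessBuilder(
                  "bbcp", localFile, "user@e2e.example.org:/data/incoming/")
                  .inheritIO()   // show bbcp's own progress output
                  .start();
          return p.waitFor();    // non-zero exit code marks the step as failed
      }
  }
  ```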

  9. Schematic view of components
  [Diagram: ORNL Cray XT4 (simulation), Opteron cluster (end-to-end system), HPSS, 40 GB/s link, command & control site]

  10. Schematic view of components (continued)
  [Same schematic as slide 9]

  11. Schematic view of components (continued)
  [Diagram: as in slide 9, plus Seaborg @ NERSC pulling data]

  12. Kepler workflow to accomplish all these tasks
  • 1239 (Java) actors
  • 4 levels of hierarchy
  • many instances of the ProcessFile and FileWatcher composite actors ("workflow templates")
  [Diagram: sub-workflows labelled with their sizes, e.g. 43 actors/3 levels, 196 actors/4 levels, 30 actors, 206 actors/4 levels, 33 actors, 137 actors, 123 actors, 66 actors, 12 actors, 243 actors/4 levels]

  13. [Workflow diagram: bbcp, ls -l, and bp2h5 steps; legend: java, remote script, remote program]

  14. Kepler actors for CPES
  • Permanent SSH connection to perform tasks on a remote machine
  • Generalized actors (sub-workflows) for specific tasks:
    - Watch a remote directory for simulation timesteps
    - Execute an external command on a remote machine
    - Tar and archive data in large chunks to HPSS
    - Transfer a remote image file and display it on screen
    - Control a running SCIRun server remotely
    - Job submission and control for various resource managers
  • The above actors do logging/checkpointing, so the final workflow can be stopped / restarted
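
  As an illustration of the permanent SSH connection, the sketch below uses the JSch library (com.jcraft.jsch) to keep one authenticated session alive and reuse it for many exec channels. The key file, host, and class names are hypothetical; this is the idea behind the CPES SSH actors, not their actual code.

  ```java
  import com.jcraft.jsch.ChannelExec;
  import com.jcraft.jsch.JSch;
  import com.jcraft.jsch.Session;
  import java.io.InputStream;

  // Sketch of a persistent SSH connection reused for many remote tasks,
  // in the spirit of the CPES SSH actors (not the actual Kepler code).
  public class RemoteShell {
      private final Session session;

      public RemoteShell(String user, String host, String keyFile) throws Exception {
          JSch jsch = new JSch();
          jsch.addIdentity(keyFile);                        // public-key login (assumed)
          session = jsch.getSession(user, host, 22);
          session.setConfig("StrictHostKeyChecking", "no"); // illustration only
          session.connect();                                // opened once, kept alive
      }

      // Each task opens a cheap exec channel on the existing session.
      public int run(String command) throws Exception {
          ChannelExec ch = (ChannelExec) session.openChannel("exec");
          ch.setCommand(command);
          InputStream out = ch.getInputStream();
          ch.connect();
          byte[] buf = new byte[4096];
          while (out.read(buf) >= 0) { /* drain remote stdout */ }
          ch.disconnect();
          return ch.getExitStatus();                        // remote exit code
      }
  }
  ```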

  15. What Kepler features are used in CPES?
  • Different computational models:
    - PN for parallelism and pipeline processing
    - DDF for sequential workflows with if-then-else and while-loop structures
    - SDF for efficient (static-schedule) sequential execution of simple sub-workflows
  • Stateful actors in stream processing of files
  • SSH for remote operations (keeps the connection alive)
  • Command-line execution of the workflow:
    - from a script (at deployment), with no GUI
    - reading workflow parameters from a file

  16. FileWatcher: a data-dependent loop
  • The SSH Directory Listing Java actor gives the new files in a directory (once)
  • This is a do-while loop whose termination condition is whether the list contains a specific element (which indicates the end of the simulation); a sketch of the loop follows below
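
  A minimal sketch of the FileWatcher logic, assuming a string sentinel file name and a polling interval; listDirectory() and emit() are hypothetical stubs standing in for the SSH Directory Listing actor and the actor's output port.

  ```java
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  // Sketch of the FileWatcher do-while loop: repeatedly take a directory
  // snapshot, emit files not seen before, and terminate once the listing
  // contains the stop file.
  public class FileWatcher {
      private final Set<String> seen = new HashSet<>();

      public void watch() throws InterruptedException {
          boolean stopped = false;
          do {
              for (String f : listDirectory()) {
                  if (f.equals("simulation.stop")) stopped = true; // assumed stop-file name
                  else if (seen.add(f)) emit(f);                   // new timestep file -> token
              }
              Thread.sleep(5000); // polling interval, illustrative
          } while (!stopped);     // the data-dependent termination condition
      }

      // Stub for the remote "ls" over SSH; returns a fixed snapshot here.
      private List<String> listDirectory() {
          return List.of("xgc.00001.bp", "simulation.stop");
      }

      // Stub for the downstream output port.
      private void emit(String file) { System.out.println(file); }
  }
  ```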

  17. Modeling problem: stopping and finishing
  • Finally you have working pipelines. Fine. But:
    - How do you stop them?
    - How do you let intermediate actors know that they will not receive more tokens?
    - How do you perform something "after" the processing?
  • We use a special token flowing through the pipelines (see the sketch after this list):
    - It is always the last item in the pipeline.
    - Actors are implemented (extra work) to skip this token.
  • A stop file created by the simulation is used:
    - to stop the "task generator" actors in the workflow (the FileWatchers)
    - to notify (stateful) actors in the pipeline that they should finalize (Archiver, Stop_AVS/Express)
    - to synchronize two independent pipelines (NetCDF + HDF5 → archive images at the end)
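
  A minimal sketch of how a pipeline actor might skip and forward the stop token, assuming a string sentinel; real CPES actors work on Kepler tokens, and all names here are illustrative.

  ```java
  // Sketch of an actor's firing logic around the stop token: ordinary tokens
  // get the normal per-file processing, while the assumed "STOP" sentinel only
  // triggers finalization and is forwarded so it stays the last item in the stream.
  public class PipelineStage {
      void fire(String token) {
          if ("STOP".equals(token)) {   // hypothetical sentinel value
              finalizeStage();          // e.g. flush the archive, stop AVS/Express
              forward(token);           // pass the stop token downstream unchanged
          } else {
              forward(process(token));  // normal work on a file token
          }
      }

      String process(String file) { return file; }              // stub: real operation
      void finalizeStage() {}                                   // stub: end-of-run work
      void forward(String token) { System.out.println(token); } // stub: output port
  }
  ```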

  18. Role of the stop file
  [Diagram: the Stop token flowing through the pipeline]

  19. Role of the stop file: extra work after the end
  [Diagram: wait for Stop on both pipelines, then Finalize]

  20. Problem: how to restart this workflow?
  • Kepler has no system-level checkpoint/restart mechanism (yet?)
    - it seems to be difficult for large Java applications
    - not to mention the status of external (and remote) things
  • Pipeline execution: each actor is processing a different step simultaneously

  21. Our solution: user-level logging/restart
  • We record the successful operations at each ("heavy") actor
  • Those actors are implemented to check, before doing something, whether it has already been done
  • When the workflow is restarted, it starts from the very beginning, but the actors simply skip operations (files, tokens) that have already been done
  • We do not worry about repeating small (control-related) actions within the workflow; it is the external operations that matter here

  22. ProcessFile core: check-perform-record (sketched below)
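
  A minimal sketch of the check-perform-record logic, assuming a simple one-operation-per-line log file; the log format and names are illustrative, not the actual ProcessFile implementation.

  ```java
  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.Paths;
  import java.nio.file.StandardOpenOption;
  import java.util.Set;

  // Sketch of the ProcessFile core: skip operations already in the log,
  // otherwise perform them and record success only afterwards. On restart,
  // the workflow replays the stream and skips everything already logged.
  public class CheckPerformRecord {
      private final Path log = Paths.get("workflow.log"); // hypothetical log location

      public boolean processOnce(String opId, Runnable operation) throws IOException {
          Set<String> done = Files.exists(log)
                  ? Set.copyOf(Files.readAllLines(log))
                  : Set.of();
          if (done.contains(opId)) return false;  // check: already recorded -> skip
          operation.run();                        // perform: e.g. bbcp, bp2h5, HPSS put
          Files.writeString(log, opId + System.lineSeparator(),
                  StandardOpenOption.CREATE, StandardOpenOption.APPEND); // record
          return true;
      }
  }
  ```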

  23. Problem: failed operations
  • What if an operation fails, e.g. one timestep cannot be transferred? Options:
    a) trust that downstream actors "fail" silently on the missing data
    b) notify everybody downstream in the pipeline (to skip): mark the token as "failed"
    c) avoid giving tasks to downstream actors for the erroneous step
  • Retrying later and processing that step is important, but…
  • …keeping up with the simulation on the next steps is even more important

  24. Our approach for failed operations
  • ProcessFile, and thus the workflow, handles failures by discarding tokens related to failed operations from the stream (see the sketch below)
  • Advantage:
    - actors need not care about failures
    - an incoming token is a task to be done
  • Disadvantage:
    - the rate of token production varies
    - this can upset Kepler's model of computation
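
  A sketch of the discard-on-failure policy, with illustrative names: a failed operation simply produces no output token, so downstream actors never need failure handling.

  ```java
  // Sketch of discard-on-failure: if the external operation throws, the token
  // is logged and dropped instead of forwarded, so downstream actors only ever
  // receive tasks that can actually be done. All names are illustrative.
  public class DiscardingStage {
      void fire(String fileToken) {
          try {
              forward(transfer(fileToken)); // success: token continues downstream
          } catch (Exception e) {
              System.err.println("failed, discarding " + fileToken + ": " + e);
              // no forward(): the token disappears from the stream, and the
              // pipeline keeps up with the simulation's next timesteps
          }
      }

      String transfer(String file) throws Exception { return file; } // stub: e.g. bbcp
      void forward(String token) { System.out.println(token); }      // stub: output port
  }
  ```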

  25. Discarding tokens on failure
  [Diagram: timesteps 3, 2, 1 enter the pipeline; timestep 1: transfer → convert → archive; timestep 2: transfer fails, token discarded; timestep 3: transfer → convert → archive]

  26. After a restart…
  [Diagram: timesteps 3, 2, 1 re-enter the pipeline; timesteps 1 and 3: skip → skip → skip (already logged); timestep 2: transfer → convert → archive]

  27. Future plans
  • Provenance management:
    - one main reason to use a scientific workflow system, e.g. in bioinformatics workflows
    - needed for debugging runs, interpreting results, repeating experiments, generating documentation, comparing runs, etc.
    - the CPES workflow has been selected as one use case for the ongoing Kepler provenance work
  • New actors in CPES for controlling asynchronous I/O from the petascale computer toward the processing cluster

  28. Thank you. Questions?
