
Input/Output APIs and Data Organization for High Performance Scientific Computing



Presentation Transcript


  1. Input/Output APIs and Data Organization for High Performance Scientific Computing
     November 17, 2008
     Petascale Data Storage Institute Workshop at Supercomputing 2008
     Jay Lofstead (GT), Fang Zheng (GT), Scott Klasky (ORNL), Karsten Schwan (GT)
     Jay Lofstead (lofstead@cc.gatech.edu)

  2. Overview
     • Motivation
     • Architecture
     • Performance

  3. Motivation
     • Many codes write large volumes of data but rarely read them back
       • TBs of data per run
       • many different data types and sizes
     • HDF-5 and pNetCDF are widely used
       • convenient
       • tool integration
       • portable format

  4. Performance/Resilience Challenges
     • pNetCDF
       • “right-sized” header
       • coordination for each data declaration
       • data stored exactly as logically described
     • HDF-5
       • b-tree file format
       • coordination for each data declaration
       • single metadata store vulnerable to corruption

  5. Architecture (ADIOS)
     [Diagram: scientific codes write through the ADIOS API (with buffering and schedule feedback), configured by external metadata in an XML file; pluggable transport methods include DART, HDF-5, MPI-IO, pNetCDF, POSIX IO, visualization engines, LIVE/DataTap, and other plug-ins.]
     • Change the IO method by changing only the XML file
     • Switch between synchronous and asynchronous transports
     • Hook into other systems such as visualization and workflow engines
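The XML-driven method switching can be illustrated with a configuration fragment in the style of the ADIOS 1.x config file; the group name, variable names, and buffer size here are hypothetical examples, not values from the talk:

```xml
<adios-config host-language="C">
  <adios-group name="restart">
    <var name="nx" type="integer"/>
    <var name="temperature" type="double" dimensions="nx"/>
  </adios-group>
  <!-- Switching the IO method means editing only this line:
       e.g. MPI, POSIX, a pHDF-5 method, or an asynchronous transport -->
  <method group="restart" method="MPI"/>
  <buffer size-MB="50" allocate-time="now"/>
</adios-config>
```

Because the method binding lives in this external file rather than in the source code, a run can move between synchronous and asynchronous transports without recompiling.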

  6. Architecture (BP)
     [Diagram of the BP file layout: Process Group 1 | Process Group 2 | ... | Process Group n | Process Group Index | Vars Index | Attributes Index | Index Offsets and Version #]
     • Each process writes its output into its own “process group” segment
     • Metadata indices follow the data segments
     • Index offsets and a version flag sit at the end of the file
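The ordering above can be sketched with a toy writer/reader. This models only the layout idea (size-prefixed data segments, then an index, then a fixed-size footer holding the index offset and version, so a reader seeks straight to the end); it is not the actual BP byte encoding, and all field widths here are illustrative:

```python
import struct

def write_bp_like(path, payloads):
    """Write payloads as size-prefixed 'process group' segments, then an
    offset index, then a footer (index offset + version) at the very end."""
    with open(path, "wb") as f:
        offsets = []
        for data in payloads:                      # process group segments
            offsets.append(f.tell())
            f.write(struct.pack("<Q", len(data)))  # local size, kept for rescans
            f.write(data)
        index_offset = f.tell()
        f.write(struct.pack("<Q", len(offsets)))   # index: count, then offsets
        for off in offsets:
            f.write(struct.pack("<Q", off))
        f.write(struct.pack("<QQ", index_offset, 1))  # footer: offset, version

def read_index(path):
    """Locate all process groups by reading only the footer and the index."""
    with open(path, "rb") as f:
        f.seek(-16, 2)                             # footer is the last 16 bytes
        index_offset, version = struct.unpack("<QQ", f.read(16))
        f.seek(index_offset)
        (count,) = struct.unpack("<Q", f.read(8))
        return version, [struct.unpack("<Q", f.read(8))[0] for _ in range(count)]
```

Putting the index at the end lets each writer stream its segment without prior coordination; the metadata is assembled once, after the data is on disk.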

  7. Resilience Features
     • Random node failure
       • detected via timeouts
       • mark the affected index entries as suspect
     • Root node failure
       • scan the file to rebuild the index
       • use local size values to find group offsets
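The root-failure recovery path can be sketched as a linear rescan. Again this uses a toy size-prefixed encoding rather than the real BP format: because each group records its own local size, the offsets can be recomputed without any surviving metadata:

```python
import struct

def rebuild_offsets(data_region):
    """Linearly rescan a region of size-prefixed process groups and
    recompute each group's offset -- an index rebuild for the case where
    the file's index/footer metadata has been lost."""
    offsets, pos = [], 0
    while pos < len(data_region):
        offsets.append(pos)
        (size,) = struct.unpack_from("<Q", data_region, pos)
        pos += 8 + size          # skip the size prefix plus the payload
    return offsets
```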

  8. Data Characteristics
     • Identify file contents efficiently
       • min/max values
       • local array sizes
     • Local-only computation makes collection “free”
       • no communication required
     • Indices provide summaries and direct access
       • copies kept for resilience
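A sketch of why local-only characteristics are “free”: each writer summarizes only its own array, so no MPI traffic is needed, and readers can later consult these per-group summaries instead of scanning the raw data (the function name and dictionary shape are illustrative, not the ADIOS API):

```python
def local_characteristics(local_array):
    """Per-process data characteristics, computed with no communication:
    just the local array size plus its min and max values."""
    return {
        "count": len(local_array),
        "min": min(local_array),
        "max": max(local_array),
    }
```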

  9. Architecture (Strategy)
     • ADIOS API for flexibility
     • Use pHDF-5/pNetCDF methods during development for “correctness”
     • Use POSIX/MPI-IO methods (BP output format) during production runs for performance

  10. Performance Overview
     • Chimera (supernova), 8192 processes
       • relatively small writes (~1 MB per process)
       • 1400 seconds with pHDF-5 vs. 1.4 seconds with POSIX (or 10 seconds with MPI-IO independent)
     • GTC (fusion), 29,000 processes
       • 25 GB/sec (out of a 40 GB/sec theoretical maximum) writing restarts
       • 3% of wall-clock time spent on IO
       • > 60 TB of total output

  11. Performance Overview
     • Overhead of collecting data characteristics is too small to measure
       • arrays of 10, 50, and 100 million entries per process
       • 128-2048 processes
       • weak scaling

  12. Performance Overview
     • Data conversion
       • Converting the Chimera 8192-process run to HDF-5 took 117 seconds on a single process (versus 1400 seconds to write HDF-5 directly)
       • Other tests show conversion time scaling linearly with data size
     • Parallel conversion will be faster still...

  13. Summary
     • Use the ADIOS API
       • selectively choose consistency
     • BP intermediate format
       • performance
       • resilience
       • convert later to HDF-5/NetCDF as needed
     Questions?
