
Lustre-WAN: Enabling Distributed Workflows in Astrophysics



Presentation Transcript


  1. Lustre-WAN: Enabling Distributed Workflows in Astrophysics Scott Michael scamicha@indiana.edu April 2011

  2. Studying Planet Formation
  • My research group investigates the formation of giant planets.
  • More than 500 extra-solar gas giants have been discovered to date.
  • Investigations are carried out using a 3D radiative hydrodynamics code.
  • Each simulation takes several weeks and produces 5-10 TB of data.
  • A series of 3-5 simulations is needed for each study.
  • The data are analyzed to produce a variety of measurements.

  3. Our Workflow Before Using Lustre-WAN
  • Our workflow has three parts:
    • Simulation – shared memory
    • Analysis – distributed memory
    • Visualization – proprietary software with interactivity
  • In the past we transferred data between HPC resources, stored the data locally, and performed the analysis and visualization there; a sketch of this transfer-and-analyze pattern follows below.
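The transfer-based workflow can be made concrete with a short sketch. This is not the group's actual tooling; the host name, paths, process count, and analysis binary are hypothetical, and the point is simply the pattern of staging every dump across the WAN before any analysis can begin.

```python
# Hypothetical sketch of the pre-Lustre-WAN workflow: every simulation dump
# is copied from the remote HPC system to local storage, then analyzed.
# Host names, paths, process counts, and the analysis binary are illustrative.
import subprocess

REMOTE = "user@login.example-hpc.edu"    # hypothetical remote login node
REMOTE_RUN = "/scratch/user/run042"      # hypothetical simulation output directory
LOCAL_RUN = "/data/local/run042"         # local staging area for analysis

def stage_dump(dump_name: str) -> None:
    """Pull one hydrodynamics dump across the WAN with scp."""
    src = f"{REMOTE}:{REMOTE_RUN}/{dump_name}"
    subprocess.run(["scp", "-r", src, LOCAL_RUN], check=True)

def analyze_dump(dump_name: str) -> None:
    """Run the (hypothetical) distributed-memory analysis on the staged copy."""
    subprocess.run(
        ["mpirun", "-np", "64", "./analyze", f"{LOCAL_RUN}/{dump_name}"],
        check=True,
    )

if __name__ == "__main__":
    for step in range(100, 110):
        dump = f"dump.{step:04d}"
        stage_dump(dump)      # data crosses the WAN once per dump
        analyze_dump(dump)    # analysis only starts after the copy completes
```

With 5-10 TB of output per simulation, the staging step in this pattern can dominate the turnaround time of a study, which is what motivated the move to a shared file system.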

  4. Using a Central Lustre File System
  • With Indiana University's Lustre-based Data Capacitor WAN file system, the systems used in each part of the workflow can all access the same data; a sketch of this shared-path access follows below.
  • Data generated by SGI Altix systems
    • 4 simulations on NCSA's Cobalt
    • 6 simulations on PSC's Pople
  • Data analyzed on departmental resources and distributed-memory machines
    • Indiana University's Big Red and Quarry
    • Mississippi State University's Raptor and Talon
  • Data visualized on departmental machines
    • Indiana University IDL license
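A minimal sketch of the shared-path access pattern, assuming the Data Capacitor WAN file system is mounted at the same path on each participating system; the mount point, directory layout, file format, and grid dimensions below are assumptions for illustration, not the group's actual setup.

```python
# Minimal sketch, assuming the Lustre-WAN (Data Capacitor WAN) file system is
# mounted at the same hypothetical path on the simulation, analysis, and
# visualization machines, so all of them read and write identical paths with
# no staging step. Mount point, layout, dtype, and grid sizes are assumptions.
import numpy as np

DCWAN_PROJECT = "/N/dcwan/projects/planetform"   # hypothetical project directory
NR, NZ, NPHI = 512, 128, 1024                    # hypothetical cylindrical grid

def open_density(step: int) -> np.ndarray:
    """Memory-map one density dump directly from the WAN-mounted file system."""
    path = f"{DCWAN_PROJECT}/run042/rho.{step:04d}.dat"
    # A memory map touches only the byte ranges the analysis actually reads,
    # so the Lustre client fetches those stripes over the WAN on demand.
    return np.memmap(path, dtype=np.float64, mode="r", shape=(NR, NZ, NPHI))

if __name__ == "__main__":
    rho = open_density(100)
    # Example measurement: peak midplane density for this dump.
    print("peak midplane density:", float(rho[:, NZ // 2, :].max()))
```

Because every system sees the same namespace, the shared-memory simulation, the distributed-memory analysis, and the IDL visualization can each work on the output in place rather than on transferred copies.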

  5. Using a Central Lustre File System

  6. Use Cases for Lustre-WAN
  • There are many cases where a researcher needs resources outside a single data center; this is increasingly common in the TeraGrid.
  • A user needs to use heterogeneous resources:
    • Shared and distributed memory systems
    • GPUs
    • Visualization systems
  • A researcher needs to capture instrument data.
  • A researcher needs to migrate to a different system.
  • Ideally, every user can access all of their data from any resource, all the time.

  7. Thanks To
  • MSU Research: Trey Breckenridge, Roger Smith, Joey Jones, Vince Sanders, Greg Grimes
  • DC Team: Steve Simms, Josh Walgenbach, Justin Miller, Nathan Heald, Eric Isaacson
  • IU Research Technologies: Matt Link, Robert Henschel, Tom Johnson
  • PSC
  • NCSA
  This material is based upon work supported by the National Science Foundation under Grant No. CNS-0521433.
