Explore how Lustre-WAN facilitates distributed workflows in astrophysics research on planet formation, utilizing shared and distributed memory systems for simulations, analysis, and visualization. Thanks to collaborative efforts, researchers can access data seamlessly across resources.
Lustre-WAN: Enabling Distributed Workflows in Astrophysics
Scott Michael
scamicha@indiana.edu
April 2011
Studying Planet Formation • My research group investigates the formation of giant planets • More than 500 extra-solar gas giants have been discovered to date • Investigations are carried out using a 3D radiative hydrodynamics code • Each simulation takes several weeks and produces 5-10 TB of data • A series of 3-5 simulations is needed for each study • The data are analyzed to produce a variety of measurements
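The figures above imply substantial storage per study. A quick back-of-the-envelope calculation using only the stated ranges (5-10 TB per simulation, 3-5 simulations per study):

```python
# Back-of-the-envelope data volume per study, using the figures above.
tb_per_sim = (5, 10)       # TB produced per simulation (stated range)
sims_per_study = (3, 5)    # simulations needed per study (stated range)

low = tb_per_sim[0] * sims_per_study[0]    # smallest study
high = tb_per_sim[1] * sims_per_study[1]   # largest study
print(f"Per-study data volume: {low}-{high} TB")
# → Per-study data volume: 15-50 TB
```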
Our Workflow Before Using Lustre-WAN • Our workflow has three parts • Simulation – shared memory • Analysis – distributed memory • Visualization – interactive proprietary software • In the past we transferred data between HPC resources, stored the data locally, and performed analysis and visualization on local machines
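To see why moving data between resources was the bottleneck in that workflow, consider a rough estimate of the transfer time for one simulation's output; the 1 Gb/s effective WAN bandwidth used here is an illustrative assumption, not a measured figure:

```python
# Rough transfer-time estimate for one simulation's output.
# The effective bandwidth is an assumed illustrative value.
data_tb = 10            # TB, upper end of the stated per-simulation output
bandwidth_gbps = 1.0    # assumed effective WAN bandwidth, Gbit/s

bits = data_tb * 1e12 * 8               # total bits to move
seconds = bits / (bandwidth_gbps * 1e9) # ideal, sustained-rate transfer
print(f"~{seconds / 3600:.1f} hours to move {data_tb} TB at {bandwidth_gbps} Gb/s")
# → ~22.2 hours to move 10 TB at 1.0 Gb/s
```

Even under this ideal sustained rate, a single simulation's output takes nearly a day to move, and each study needs several such transfers.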
Using a Central Lustre File System • With Indiana University's Lustre-based Data Capacitor WAN file system, the different systems involved in separate parts of the workflow can all access the same data • Data generated on SGI Altix systems • 4 simulations on NCSA's Cobalt • 6 simulations on PSC's Pople • Data analyzed on departmental resources and distributed memory machines • Indiana University's Big Red and Quarry • Mississippi State University's Raptor and Talon • Data visualized on departmental machines • Indiana University IDL license
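A wide-area mount makes this arrangement concrete: every resource sees the same POSIX path, so no explicit staging step is needed between workflow stages. A minimal sketch, where the mount point and file names are hypothetical examples rather than actual Data Capacitor paths:

```python
# Sketch: with a WAN-mounted Lustre file system, simulation, analysis,
# and visualization hosts all open the same POSIX path -- no staging step.
# The mount point and file name below are hypothetical examples.
import os

DC_WAN = "/N/dcwan/projects/planet-formation"   # hypothetical mount point
dump = os.path.join(DC_WAN, "run042", "density_t1500.dat")

def analyze(path):
    """Open a simulation dump directly from the shared mount.

    The same call works unchanged on the simulation host that wrote the
    file and on any analysis or visualization host that mounts the
    file system.
    """
    with open(path, "rb") as f:
        header = f.read(64)   # e.g. read a fixed-size header record
    return header
```

The point of the sketch is that `dump` is one path, valid everywhere the file system is mounted, rather than one copy per machine.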
Use Cases for Lustre-WAN • There are many cases where a researcher needs resources outside a single data center • This is increasingly common in the TeraGrid • A user needs heterogeneous resources • Shared and distributed memory • GPUs • Visualization systems • A researcher needs to capture instrument data • A researcher needs to migrate to a different system • Ideally, every user can access all of their data from any resource at any time
Thanks To • DC Team • Steve Simms • Josh Walgenbach • Justin Miller • Nathan Heald • Eric Isaacson • IU Research Technologies • Matt Link • Robert Henschel • Tom Johnson • MSU Research • Trey Breckenridge • Roger Smith • Joey Jones • Vince Sanders • Greg Grimes • PSC • NCSA
This material is based upon work supported by the National Science Foundation under Grant No. CNS-0521433