TeraGrid Advanced User Support (AUS)

Presentation Transcript


  1. TeraGrid Advanced User Support (AUS)
     Amit Majumdar, SDSC Area Director – AUS
     TeraGrid Annual Review, April 6-7, 2009

  2. Outline of This Short Talk
     • Overview of TeraGrid-wide AUS
     • Three Sub-Areas Under AUS
       • Advanced Support for TeraGrid Applications (ASTA)
       • Advanced Support Projects (ASP)
       • Advanced Support EOT (ASEOT)
     • Benefit of TeraGrid-wide AUS
     • Management and Operation of AUS
     • Focus of PY05 Plan

  3. Overview of TeraGrid-Wide AUS
     • TeraGrid-wide AUS Area initiated in August 2008
       • Until then, effort was mostly RP-based
     • AUS staff are expert computational scientists (~90% hold a Ph.D.)
     • TG-wide AUS allows global management and coordination
       • Provides the best mix/match of staff expertise for users
       • Allows close interaction with DV, SGW, EOT, US, UFC
     • This report/presentation covers both pre-August RP-based and post-August TG-wide AUS activities
     • TRAC allocation-based Advanced Support for TeraGrid Applications (ASTA) has been offered since early 2007

  4. Three Sub-Areas Under AUS
     • Advanced Support for TeraGrid Applications (ASTA)
     • Advanced Support Projects (ASP)
     • Advanced Support EOT (ASEOT)

  5. AUS - ASTA
     • Users request ASTA as part of the quarterly TRAC process
     • Requests are reviewed by the TRAC and given a recommendation score
     • Scores are taken into account in selecting ASTA projects
     • Other criteria:
       • Match of AUS staff expertise to the ASTA
       • RP site(s) where the PI has an allocation
       • ASTA work plan
       • Interest of the PI and the PI's team
     • Additional mechanisms for ASTA added in September 2008:
       • Startup and Supplemental ASTA
       • The scope of a regular US activity can be elevated to an ASTA; the PI then requests ASTA at the next TRAC (AUS support continues uninterrupted)

  6. Number of ASTAs Started per Quarter - Trend
     • Currently a total of ~40 active ASTAs, from the last three TRACs, Startup/Supplemental requests, and projects continuing from 2008
     [Chart: ASTAs started per quarter; annotation marks when TeraGrid-wide AUS started]

  7. ASTAs – Significant Deep & Broad Impact
     • ASTAs breaking records:
       • Astronomy: grid size, AMR depth
       • Earthquake simulations: grid size, 1 Hz, 100K cores
       • MD simulations: time scales, larger QM region
       • DNS turbulence: grid size (world-record grid-size benchmark)
     • Other ASTA impacts:
       • Million serial job submission (see the sketch after this slide)
       • Workflow – data analysis
       • Math library analysis
       • Detailed visualization
       • SHMEM version of an MPI code
       • Innovative resource use
     • 15 ASTAs contributed to Science Highlights
     • Joint papers between ASTA staff and PI teams in journals and conferences, SC08 best poster, etc.
     • Joint proposals between ASTA staff and PIs – several PetaApps and other NSF proposals
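A minimal sketch of the job-bundling idea behind the "million serial job submission" item above, not the actual ASTA implementation: a single MPI job farms many serial tasks out round-robin across its ranks instead of submitting each task to the batch scheduler. It assumes mpi4py is available; the task list and the "./serial_app" binary are hypothetical placeholders.

    # Sketch: bundle many serial tasks inside a single MPI job (hypothetical example)
    from mpi4py import MPI
    import subprocess

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Placeholder task list; a real run might read task descriptions from a file
    tasks = ["input_%06d.dat" % i for i in range(100000)]

    # Round-robin assignment: rank r handles tasks r, r+size, r+2*size, ...
    for i in range(rank, len(tasks), size):
        # "./serial_app" is a hypothetical serial executable
        subprocess.run(["./serial_app", tasks[i]], check=False)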

  8. ASTA Example - CFD
     • PI: Yeung, Georgia Tech
     • World's largest DNS turbulence simulation – first-ever 8192³ benchmarks on Ranger and Kraken
     • Collaborative ASTA drawing on three different RP sites with complementary expertise:
       • FFT performance
       • MPI collective scaling (see the sketch after this slide)
       • Performance analysis
       • Visualization
     Figure: The intense enstrophy (red iso-contours) in long but thin tubes surrounded by large energy dissipation (blue/green volume rendering). Image courtesy of Kelly Gaither (TACC) and Diego Donzis (U. Maryland).
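The MPI collective scaling work in a distributed 3D FFT is largely about the all-to-all communication that dominates the transpose step. Below is a minimal sketch, assuming mpi4py and NumPy are available, of how such a collective can be timed at scale; the message size and repetition count are illustrative choices, not the values used in this ASTA.

    # Sketch: time the MPI Alltoall collective that dominates a distributed
    # 3D FFT transpose. Launch with e.g.: mpirun -np 64 python alltoall_timing.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    nprocs = comm.Get_size()

    count_per_rank = 1 << 16                    # doubles sent to each rank (illustrative)
    sendbuf = np.ones(nprocs * count_per_rank)  # contiguous send buffer
    recvbuf = np.empty_like(sendbuf)

    reps = 10
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        comm.Alltoall(sendbuf, recvbuf)
    comm.Barrier()
    elapsed = (MPI.Wtime() - t0) / reps

    if comm.Get_rank() == 0:
        mib_per_rank = sendbuf.nbytes / 2**20
        print("%d ranks: Alltoall of %.1f MiB/rank took %.2f ms"
              % (nprocs, mib_per_rank, elapsed * 1e3))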

  9. ASTA Example - MD
     • PI: Roitberg, U. Florida – focuses on combined QM/MM simulations of biological enzymes
     • Implemented high-performance, parallel QM/MM within AMBER
       • Semi-empirical QM/MM runs 10-25 times faster
       • Significantly larger QM regions can be studied in detail
     • Resulted in progress toward treatment of Chagas' disease
     • Work released through AMBER – impacts many TG and non-TG users; work transferable to CHARMM
     • 2009 ASTA effort:
       • Improve the serial and parallel efficiency of the DFTB QM/MM
       • Improve scaling to further accelerate the SCF convergence in QM calculations
     Figure: A snapshot from a QM/MM simulation of the protein Nitrophorin-2 expressed in the triatomine Rhodnius prolixus, a vector for Trypanosoma cruzi and a key target for Chagas' disease inhibitors. Image: A.E. Roitberg, G. Seabra (UFL), J. Torras (UPC-Spain), R.C. Walker (SDSC).

  10. AUS - ASP
     • Foundation work: installation of domain-science and HPC-specific software, associated optimization on machines, debugging, training, and interaction with users
     • Projects with the opportunity for broad impact
       • Identified by AUS staff and other TG WGs, with input from users
     • Example and potential projects:
       • Benchmark widely used community Molecular Dynamics (NAMD, AMBER, others) and Materials Science (CPMD, VASP, others) applications on TeraGrid machines and provide well-documented results (see the sketch after this slide)
         • Beneficial to both users and TRAC reviewers
       • Provide usage-scenario-based technical documentation on effective use of profiling and tracing tools on TeraGrid machines
       • Analyze and benchmark hybrid and multi-core programming techniques
       • Document exemplary ways of using TG resources drawn from ASTA projects, science highlights, etc.
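As an illustration of the kind of well-documented benchmark information such a project would publish, here is a minimal sketch that turns raw wall-clock timings from MD runs at several core counts into a speedup and parallel-efficiency table. The timings below are placeholders for illustration only, not measured TeraGrid results.

    # Sketch: tabulate speedup and parallel efficiency from scaling-run timings
    # (placeholder numbers, not measured benchmark results)
    timings = {      # cores -> wall-clock time in seconds
        64: 1000.0,
        128: 520.0,
        256: 275.0,
        512: 150.0,
    }

    def scaling_table(timings):
        base_cores = min(timings)
        base_time = timings[base_cores]
        print("%8s %10s %9s %11s" % ("cores", "time (s)", "speedup", "efficiency"))
        for cores in sorted(timings):
            t = timings[cores]
            speedup = base_time / t                      # relative to the smallest run
            efficiency = speedup / (cores / base_cores)  # ideal speedup = core ratio
            print("%8d %10.1f %9.2f %10.0f%%" % (cores, t, speedup, efficiency * 100))

    scaling_table(timings)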

  11. Molecular Dynamics Benchmark (AUS Advanced Support Project)

  12. AUS - ASEOT
     • Contribute to and deliver advanced HPC/CI topics in training, tutorials, and workshops
     • AUS outreach to the user community
     • Organize/participate in workshops:
       • FT workshop with Blue Waters, DOE labs, and vendors
       • Special MD-HPC session at ACS
     • Work with XS-WG and EOT to create a venue for PetaScale users and SDCI-based tool developers
     • Interact with other NSF-funded projects (e.g. DataNet, iPlant) to understand/anticipate their CI needs

  13. Benefit of TeraGrid-wide AUS
     • Rare pool of excellent staff for NSF - expertise and experience in HPC, domain science, and architectures
     • Coordinated TeraGrid-wide AUS - mix/match of expertise from different RP sites, e.g.:
       • Molecular Dynamics expert + HPC performance tools expert
       • FFT expert + data (HDF5) expert
       • Parallel I/O expert + visualization expert
       • PETSc expert from one RP working on another RP site's machine
     • Innovative ASPs - significant user impact
       • e.g. AMBER and NAMD experts collaborating on code scaling
       • e.g. IPM and TAU experts collaborating on profiling and tuning
     • Complementary advanced training opportunities (at TG09, possibly SC09)

  14. Management and Operation of AUS
     • Managed by a team of AUS POCs from participating RP sites
       • Bi-weekly telecon covering AUS projects, matching of AUS staff, contributions to reports, etc.
     • Bi-weekly technical tele/webcon among AUS staff
       • Technical presentations on ASTAs and ASPs by AUS technical staff
       • Technical insight gained by all AUS staff on ASTAs, issues, and resources
       • AUS staff expertise and research interests shared
       • Provides a collaborative environment for AUS staff for current and future projects
     • Other ASP-specific telecons as needed
     • All presentations and meeting minutes available in the Wiki

  15. Focus of PY05 Plan
     • ASTA - build on experience from PY04 ASTAs
       • Utilize team expertise from PY04
       • Seek out new fields (economics, social science)
       • Develop thorough work plans with PIs
       • Utilize PY04 ASTA work, e.g. VASP optimized on machine X impacts all current and future users
     • ASP
       • Petascale, Data, and Viz ASPs - possibly jointly with DV, SGW
       • User survey provides ideas
     • EOT
       • Continue outreach to DataNet, iPlant
       • ASTA for groups under-represented in HPC/CI - with EOT/Pathways
