
State of TeraGrid in Brief



Presentation Transcript


  1. State of TeraGrid in Brief
     John Towns
     TeraGrid Forum Chair
     Director of Persistent Infrastructure
     National Center for Supercomputing Applications, University of Illinois
     jtowns@ncsa.illinois.edu

  2. TeraGrid Objectives
     • DEEP Science: Enabling Terascale and Petascale Science
       • make science more productive through an integrated set of very-high-capability resources
       • address key challenges prioritized by users
     • WIDE Impact: Empowering Communities
       • bring TeraGrid capabilities to the broad science community
       • partner with science community leaders - “Science Gateways”
     • OPEN Infrastructure, OPEN Partnership
       • provide a coordinated, general-purpose, reliable set of services and resources
       • partner with campuses and facilities

  3. What is the TeraGrid?
     • An instrument that delivers high-end IT resources/services: computation, storage, visualization, and data/services
       • a computational facility – over two PFlops of parallel computing capability
       • a collection of Science Gateways – a new idiom for access to HPC resources via discipline-specific web-portal front-ends
       • a data storage and management facility – over 20 petabytes of storage (disk and tape), over 100 scientific data collections
       • a high-bandwidth national data network
     • A service: help desk and consulting, Advanced Support for TeraGrid Applications (ASTA), education and training events and resources
     • Available freely to research and education projects with a US lead
       • research accounts allocated via peer review
       • Startup and Education accounts granted automatically

  4. 11 Resource Providers, One Facility
     [Map of TeraGrid sites: UW, UC/ANL, PSC, NCAR, PU, NCSA, Caltech, UNC/RENCI, IU, ORNL, USC/ISI, NICS, SDSC, LONI, and TACC, plus the Grid Infrastructure Group (UChicago); the legend distinguishes Resource Providers (RPs), Software Integration Partners, and Network Hubs.]

  5. How is TeraGrid Organized?
     • TG is set up like a large cooperative research group
       • evolved from many years of collaborative arrangements between the centers
       • still evolving!
     • Federation of 12 awards
       • Resource Providers (RPs): provide the computing, storage, and visualization resources
       • Grid Infrastructure Group (GIG): central planning, reporting, coordination, facilitation, and management group
     • Leadership provided by the TeraGrid Forum
       • made up of the PIs from each RP and the GIG
       • led by the TG Forum Chair (an elected position), who is responsible for coordinating the group
         • John Towns – TG Forum Chair
       • responsible for the strategic decision making that affects the collaboration
     • Day-to-Day Functioning via Working Groups (WGs)
       • each WG is under a GIG Area Director (AD), includes RP representatives and/or users, and focuses on a targeted area of TeraGrid

  6. TeraGrid Resources and Services
     • Computing: ~2 PFlops aggregate (see the cross-check after this slide)
       • more than two PFlops of computing power today, and growing
       • Ranger: 579 TFlops Sun Constellation resource at TACC
       • Kraken: 1.03 PFlops Cray XT5 at NICS/UTK
     • Remote visualization servers and software
       • Spur: 128-core, 32-GPU cluster connected to Ranger’s interconnect
       • Longhorn: 2048-core, 512-GPU cluster directly connected to Ranger’s parallel file system
       • Nautilus: 1024-core, 16-GPU, 4 TB SMP directly connected to the parallel file system shared with Kraken
     • Data
       • allocation of data storage facilities
       • over 100 Scientific Data Collections
     • Central allocations process
       • single process to request access to (nearly) all TG resources/services
     • Core/Central services
       • documentation
       • User Portal
       • EOT program
     • Coordinated technical support
       • central point of contact for support of all systems
       • Advanced Support for TeraGrid Applications (ASTA)
       • education and training events and resources
       • over 30 Science Gateways
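A quick cross-check of the "~2 PFlops aggregate" figure, as a minimal Python sketch: it totals only the two peak numbers quoted on this slide, so the gap to ~2 PFlops is the contribution of the other TeraGrid systems, which the slide does not itemize.

    # Minimal sketch: sum the peak compute figures quoted on the slide (in TFlops).
    # Only Ranger and Kraken are itemized; the remainder of the ~2 PFlops aggregate
    # comes from other TeraGrid systems not listed here.
    named_systems_tflops = {
        "Ranger (Sun Constellation, TACC)": 579,
        "Kraken (Cray XT5, NICS/UTK)": 1030,
    }
    named_total = sum(named_systems_tflops.values())
    print(f"Itemized systems: {named_total / 1000:.2f} PFlops of the ~2 PFlops aggregate")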

  7. Resources Evolving
     • Recent and anticipated resources
       • Track 2D awards: Dash/Gordon (SDSC), Keeneland (GaTech), FutureGrid (Indiana)
       • XD Visualization and Data Analysis Resources: Spur (TACC), Nautilus (UTK)
       • “NSF DCL”-funded resources: PSC, NICS/UTK, TACC, SDSC
       • Other: Ember (NCSA)
     • Continuing resources
       • Ranger, Kraken
     • Retiring resources
       • most other resources in TeraGrid today will retire in 2011
     • Attend the BoFs for more on this:
       • New Compute Systems in the TeraGrid Pipeline (Part 1) – Tuesday, 5:30-7:00pm in Woodlawn I
       • New Compute Systems in the TeraGrid Pipeline (Part 2) – Wednesday, 5:15-6:45pm in Stoops Ferry

  8. Impacting Many Agencies (CY2008 data)
     [Two pie charts, Supported Research Funding by Agency and Resource Usage by Agency, covering NSF, NIH, DOE, NASA, DOD, international, university, industry, and other sources; NSF accounts for roughly half of each (49% and 52%).]
     • $91.5M Direct Support of Funded Research
     • 10B NUs Delivered

  9. Across a Range of Disciplines
     [Pie chart of usage by discipline, reconstructed from the slide and re-plotted in the sketch below: Physics 26%; Molecular Biosciences 18%; Astronomical Sciences 14%; Atmospheric Sciences 8%; Chemistry 7%; Chemical, Thermal Systems 6%; Materials Research 6%; Advanced Scientific Computing 6%; Earth Sciences 5%; 19 others 4%.]
     • >27B NUs Delivered in 2009
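For reference, a minimal matplotlib sketch that re-plots this breakdown; the label-to-percentage pairing is reconstructed from the original pie chart, so treat the values as approximate.

    import matplotlib.pyplot as plt

    # Discipline shares (percent of NUs) as reconstructed from the slide's pie chart.
    disciplines = {
        "Physics": 26, "Molecular Biosciences": 18, "Astronomical Sciences": 14,
        "Atmospheric Sciences": 8, "Chemistry": 7, "Chemical, Thermal Systems": 6,
        "Materials Research": 6, "Advanced Scientific Computing": 6,
        "Earth Sciences": 5, "19 Others": 4,
    }

    plt.pie(list(disciplines.values()), labels=list(disciplines.keys()), autopct="%1.0f%%")
    plt.title(">27B NUs delivered in 2009, by discipline")
    plt.axis("equal")  # keep the pie circular
    plt.show()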

  10. Ongoing Impact
     • More than 1,200 projects supported
     • 54 examples highlighted in the most recent TG Annual Report
       • atmospheric sciences, biochemistry and molecular structure/function, biology, biophysics, chemistry, computational epidemiology, environmental biology, earth sciences, materials research, advanced scientific computing, astronomical sciences, computational mathematics, computer and computation research, global atmospheric research, molecular and cellular biosciences, nanoelectronics, neurosciences and pathology, oceanography, physical chemistry
     • 2009 TeraGrid Science and Engineering Highlights
       • 16 focused stories
       • http://tinyurl.com/TeraGridSciHi2009-pdf
     • 2009 EOT Highlights
       • 12 focused stories
       • http://tinyurl.com/TeraGridEOT2009-pdf

  11. Continued Growth in TeraGrid
     • 20% increase in the active user community
       • over 4,800 active users annually
       • in the past year (Sep ’09), added the 10,000th new user since the beginning of the project
     • 255% growth in delivered compute resources
       • more than 27B NUs delivered in the past year
     • Over 45 ASTA projects currently in progress
       • each quarterly TRAC receives about 15 ASTA requests
     • New phylogenetics science gateway (CIPRES, www.phylo.org) has more researchers running jobs than all gateways combined had in 2009
       • cited in 35+ publications
       • 3× initial projected usage; jobs from institutions in 17 EPSCoR states
     • Campus Champions Program continues as a very successful and growing outreach activity
       • now 91 Champions, up from ~60 last year
       • 50 are here at TG’10!
     • Very successful student program for TG’xy
       • initiated at TG’09 with ~130 students
       • continued at TG’10 with ~100 students

  12. Continued Deployment of New Capabilities
     • Ongoing deployment of new compute and data resources
       • referred to earlier; more information at the BoFs
     • Completed deployment of advanced scheduling capabilities
       • metascheduling, co-scheduling, and on-demand scheduling
     • Expanded deployment of globally accessible file systems
       • Data Capacitor provides ~700 TB to most current production TG sites
       • a second Lustre-based wide-area file system is in development
     • TeraGrid joined the InCommon federation
       • deployed a prototype service allowing users at 26 of the 171 participating universities to access TeraGrid using their local campus credentials

  13. TeraGrid: a project in transition
     • Currently in a one-year extension of the project
       • start of XD for CMS/AUSS/TEOS deferred for one year (to April 2011)
       • TeraGrid Extension funded to bridge to the XD program
         • 12 months of funding to support most GIG functions and some non-Track 2 RP resources
       • still some uncertainty in the sequence/timing of events
     • All activity areas have effort reserved for the TeraGrid → XD transition as appropriate
       • planned transition period: April-July 2011
       • transition issues exist for nearly all areas
     • Continued support of users during the transition is our highest priority
     • More information on this tomorrow morning:
       • “The Transition from TeraGrid to XD”
       • 8:30am Wednesday in the Grand Station room

  14. Questions?
