
Scientific Computing Group


Presentation Transcript


  1. Scientific Computing Group
  Ricky A. Kendall
  Scientific Computing
  Center for Computational Sciences

  2. Center for Computational Sciences / Leadership Computing Facility
  A. Bland, Director; L. Gregg, Division Secretary
  Operations Council: A. Stewart, Quality Assurance; T. Jones, Cyber Security; M. Dobbs, Facility Mgmt.; W. McCrosky, Finance Officer; W. Besancenez, Procurement; M. Palermo, HR Mgr.; K. Johnson, Recruiting; R. Toedte, Safety & Health; N. Wright, Org. Mgmt. Specialist
  Advisory Committee: J. Bernholc, S. Karin, T. Dunning, D. Keyes, J. Dongarra, D. Reed, K. Droegemeier, T. Sterling, J. Hack, W. Washington
  LCF System Architect: S. Poole; Chief Technology Officer: A. Geist; Deputy Project Director: K. Boudwin; Director for Science: D. Kothe; Director of Operations: D. Kothe*
  Scientific Computing (R. Kendall; L. Rael): S. Ahern##, V. Lynch5, S. Alam5, B. Messer, E. Apra5, R. Mills, R. Barrett5, G. Ostrouchov5, R. Barretto2, R. Sankaran, D. Banks3, A. Tharrington, J. Daniel, R. Toedte, M. Fahey, T. White, S. Klasky#, J. Kuehn5 (# End-to-End Solutions Lead; ## Viz Task Lead)
  Cray Supercomputing Center for Excellence (J. Levesque): T. Darland, L. DeRose4, D. Kiefer4, J. Larkin4, N. Wichmann4
  User Assistance and Outreach (J. White; L. Rael): Y. Ding, D. Frederick, C. Fuson, C. Halloy3, S. Hempfling, M. Henley, J. Hines, S. Jones6, B. Renaud, B. Whitten, L. Williams5, K. Wong3
  High-Performance Computing Operations (A. Baker; L. Gregg*): M. Bast, J. Lothian, J. Becklehimer4, D. Maxwell##, J. Breazeale6, M. McNamara4, J. Brown6, J. Miller6, A. Enger4, G. Phipps Jr.6, C. England, G. Pike, J. Evanko4, V. Rothe#, M. Griffith, S. Shpanskiy, V. Hazlewood5, D. Vasil, T. Jones, C. Willis4, C. Leach6, T. Wilson6, D. Londo4, S. White (# Team Lead, Computer Ops; ## Technical Coordinator)
  Technology Integration (S. Canon; L. Gregg*): T. Barron, S. Carter##, K. Matney, M. Minich, S. Oral, D. Steinert, F. Wang, V. White, W. Yu5 (## Networking Lead)
  Site Preparation: K. Dempsey; Hardware Acquisition: A. Bland*; Test and Acceptance Development: S. Canon; Commissioning: A. Baker; Project Management: D. Hudson5; Project R&D: A. Geist; Cray Project Director: K. Kafka4
  Key: 1 Student; 2 Post Graduate; 3 JICS; 4 Cray, Inc.; 5 Matrixed; 6 Subcontract; * Interim (the original chart also color-coded new positions within ORNL and new hires at ORNL)

  3. Scientific Computing: Tasks and Deliverables
  • Project partnerships in science, visualization, workflow, and data analytics
    - Level of integration depends on the project
  • High-level user assistance
    - The main staff function: whatever it takes!
  • Algorithmic development at scale
    - For scientific codes, visualization, and workflow
  • Application and library code optimization and scaling
    - How to use the LCF resources for science
  • User voice within LCF planning exercises
    - Users are the focus
  • Requirements gathering and documentation
    - What's next for our users and our center
  • Exploiting parallel I/O and other technologies in applications
    - Specific focus for targeted applications
  • Acquiring and deploying viz and end-to-end systems
    - Infrastructure building
  • Benchmarking and evaluation exercises with vendor interaction
    - Learning how best to use current and new LCF resources

  4. Partnership with Projects on LCF Resources
  Shared R&D staff work with Pilot, End Station, and other projects. The level of integration ranges from lighter-touch help (choice of tools, compiler flags, optimizations) through library utilization and porting/tuning, up to code design and algorithmic developments at scale.

  5. High-Level User Assistance
  Each project is supported across compute, visualization, liaison, and end-to-end needs. Whatever it takes!

  6. Algorithmic and Programming Model Development at Scale
  • Complex tradeoffs for applications
    - Usually among storage, computation, and communication
  • Redesigning with new methods
    - e.g., fixed grids to adaptive mesh refinement (AMR) grids
  • Programming model changes
    - e.g., transforming message passing to a one-sided model (see the sketch below)
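The one-sided transformation in the last bullet can be made concrete with a small sketch. This is not code from the slides; the buffer name, element count, and rank pairing are illustrative assumptions. It contrasts a two-sided MPI send/receive with the equivalent one-sided put through an RMA window:

```c
/* Sketch: two-sided message passing vs. a one-sided (RMA) put.
 * Run with at least 2 ranks, e.g. mpirun -np 2 ./a.out.
 * Buffer name and size are illustrative. */
#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    int rank;
    double buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < N; i++) buf[i] = rank;

    /* Two-sided: sender and receiver must both participate. */
    if (rank == 0)
        MPI_Send(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* One-sided: rank 0 puts directly into rank 1's exposed window;
     * rank 1 posts no matching receive, only the fence synchronization. */
    MPI_Win win;
    MPI_Win_create(buf, N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(buf, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);
    MPI_Win_free(&win);

    MPI_Finalize();
    return 0;
}
```

The payoff at scale is that only the origin rank drives the transfer, decoupling communication from the target's schedule.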

  7. Application and Library Optimization and Scaling
  • Applications for the LCF systems
    - Catamount issues for applications
    - Dual-core/single-core settings
    - Compiler flags for optimal performance (application/library specific)
    - Modifications for scaling
  • Libraries built and kept up to date
    - HDF5, PETSc, FFTW, etc. (a parallel HDF5 sketch follows)
  [Figure: scaling plots for PETSc and HDF5 showing performance versus number of cores (0 to 8192), with Ideal, Before, and After curves]
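As a concrete instance of the library work above, here is a minimal parallel HDF5 sketch, assuming an MPI-enabled HDF5 build (1.8 or later). The file name demo.h5, dataset name values, and slice size are illustrative assumptions, not taken from the slides:

```c
/* Sketch: each rank writes its own contiguous slice of a 1-D dataset
 * through MPI-IO using parallel HDF5. Names and sizes are illustrative. */
#include <hdf5.h>
#include <mpi.h>

#define LOCAL_N 1000

int main(int argc, char **argv)
{
    int rank, nranks;
    double data[LOCAL_N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    for (int i = 0; i < LOCAL_N; i++) data[i] = rank;

    /* Open the file collectively over MPI-IO. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("demo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Global dataset holds nranks * LOCAL_N doubles. */
    hsize_t gdim = (hsize_t)nranks * LOCAL_N;
    hid_t fspace = H5Screate_simple(1, &gdim, NULL);
    hid_t dset = H5Dcreate(file, "values", H5T_NATIVE_DOUBLE, fspace,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Each rank selects its own hyperslab and writes collectively. */
    hsize_t start = (hsize_t)rank * LOCAL_N, count = LOCAL_N;
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, NULL, &count, NULL);
    hid_t mspace = H5Screate_simple(1, &count, NULL);
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, data);

    H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
    H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
    MPI_Finalize();
    return 0;
}
```

Collective writes let the MPI-IO layer aggregate requests rather than issuing one small write per process, which matters for scaling on a shared parallel file system.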

  8. Requirements Gathering and Documentation
  • Participate in the Applications Requirements Council (ARC)
  • Determine requirements for applications on the machine
    - Algorithmic
    - Programming models
    - Architectural
    - Infrastructure
  • Develop ties to application areas not currently at scale on the machine
  • Work with developers to understand application properties
    - Memory footprint
    - Algorithmic scalability
    - Potential for future resources
  • Colella's 7 Dwarfs analysis
    - The seven algorithm types are scattered broadly among science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused.
    - Structured grids and dense linear algebra algorithms are the most widely used (by over half of the representative codes), hence system attributes such as node peak flops and memory capacity, memory latency, and interconnect latency will be important. (A minimal structured-grid kernel is sketched below.)
    - Particle-based and Monte Carlo algorithms, which have similar properties from a system standpoint, are also broadly used and can tax interconnect latency and, in some cases, node memory capacity, depending upon implementation and usage.
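To illustrate why the analysis ties structured grids to memory capacity and latency, here is a minimal 5-point Jacobi sweep. The grid dimensions and sweep count are illustrative assumptions; a production code would add cache blocking and MPI halo exchanges:

```c
/* Sketch: a 5-point structured-grid (Jacobi) sweep, the kernel family
 * the dwarfs analysis flags as most common. Sizes are illustrative. */
#include <stdlib.h>

#define NX 512
#define NY 512

static void jacobi_sweep(const double *u, double *unew)
{
    /* Each interior point does 4 flops against 5 memory accesses, so
     * the loop is bound by memory bandwidth and latency rather than
     * peak flops, which is why those attributes rank highly. */
    for (int j = 1; j < NY - 1; j++)
        for (int i = 1; i < NX - 1; i++)
            unew[j * NX + i] = 0.25 * (u[j * NX + i - 1] + u[j * NX + i + 1] +
                                       u[(j - 1) * NX + i] + u[(j + 1) * NX + i]);
}

int main(void)
{
    double *u = calloc(NX * NY, sizeof *u);
    double *unew = calloc(NX * NY, sizeof *unew);
    for (int sweep = 0; sweep < 100; sweep++) {
        jacobi_sweep(u, unew);
        double *tmp = u; u = unew; unew = tmp;  /* swap buffers */
    }
    free(u);
    free(unew);
    return 0;
}
```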

  9. Colella's 7 Dwarfs Analysis

  10. System Attribute Priority per Science Domain
  [Figure: matrix rating each system attribute as Highest, Medium, or Lowest for each science domain]

  11. Contact
  Ricky A. Kendall
  Scientific Computing
  Center for Computational Sciences
  (865) 576-6905
  kendallra@ornl.gov
