
Escapades to Exascale



  1. Escapades to Exascale • Tom Scogland, Balaji Subramaniam, Wu-chun Feng

  2. The Ultimate Goal of “The Green500 List” • Raise awareness of energy efficiency in supercomputing. • Drive energy efficiency as a first-order design constraint (on par with FLOPS). • Encourage fair use of the list rankings to promote energy efficiency in high-performance computing systems.

  3. Agenda • Green Supercomputing: A Brief History • History • Evolution • Green500 Trends • Projections Toward Exascale • Setting Trends for Energy-Efficient Supercomputing • Methodologies • Metrics • Workloads • Conclusions

  4. Brief History: From Green Destiny to The Green500 List • 2/2002: Green Destiny (http://sss.lanl.gov/ → http://sss.cs.vt.edu/) • “Honey, I Shrunk the Beowulf!” 31st Int’l Conf. on Parallel Processing, August 2002. • 4/2005: Workshop on High-Performance, Power-Aware Computing • Keynote address generates initial discussion for the Green500 List • 4/2006 and 9/2006: Making a Case for a Green500 List • Workshop on High-Performance, Power-Aware Computing • Jack Dongarra’s CCGSC Workshop “The Final Push” • 11/2006: Launch of Green500 Web Site and RFC • http://www.green500.org/ • Generates feedback from hundreds • 11/2007: First Green500 List Officially Released

  5. Evolution of The Green500 • 11/2009: Experimental Lists Created • Little Green500: More focus on LINPACK energy efficiency than on LINPACK performance in order to foster innovation • HPCC Green500: Alternative workload (i.e., HPC Challenge benchmarks) to evaluate energy efficiency • Open Green500: Enabling of alternative innovative approaches for LINPACK to improve performance and energy efficiency, e.g., mixed precision • 11/2010: First Green500 Official Run Rules Released • 11/2010: Open Green500 Merged into Little Green500 • 6/2011: Collaborations Begin on Methodologies for Measurement of Supercomputer Energy Efficiency

  6. Agenda • Green Supercomputing: A Brief History • History • Evolution • Green500 Trends • Projections Toward Exascale • Setting Trends for Energy-Efficient Supercomputing • Methodologies • Metrics • Workloads • Conclusions

  7. Trends: How Energy Efficient Are We?

  8. Trends: What About Power?

  9. Trends: Energy vs. Performance Efficiency [Figure: MFLOPS/Watt versus % of peak MFLOPS, with GPU- and Cell-based systems called out]

  10. Trends in Feature Size [Figure: average minimum feature size in nanometers over time]

  11. Agenda • Green Supercomputing: A Brief History • History • Evolution • Green500 Trends • Projections Toward Exascale • Setting Trends for Energy-Efficient Supercomputing • Methodologies • Metrics • Workloads • Conclusions

  12. ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems P. Kogge, K. Bergman, S. Borkar, and D. Campbell, “Exascale computing study: Technology challenges in achieving exascale systems,” 2008. • Goal • “Because of the difficulty of achieving such physical constraints, the study was permitted to assume some growth, perhaps a factor of 2X, to something with a maximum limit of 500 racks and 20 MW for the computational part of the 2015 system.” • Realistic Projection? • “Assuming that Linpack performance will continue to be of at least passing significance to real Exascale applications, and that technology advances in fact proceed as they did in the last decade (both of which have been shown here to be of dubious validity), then […] an Exaflop per second system is possible at around 67 MW.”

  13. Trends: From 2009 to 2011 and Beyond [Figure: efficiency trend lines marking “Exascale in 100 MW” and “Exascale in 20 MW,” with the K-Computer and Blue Gene/Q plotted]

  14. Trends: Extrapolating to Exaflop • K-Computer → 1.2 GW • BG/Q → 493 MW
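
As a quick sanity check on those extrapolations, the sketch below reproduces the arithmetic; the per-system efficiencies are the approximate November 2011 Green500 figures (an assumption on my part, since the slide shows only the resulting power draws).

```python
# Linear extrapolation of LINPACK energy efficiency to one exaflop.
# Efficiencies are approximate November 2011 Green500 figures, not
# values taken from the slide itself.
EXAFLOP_MFLOPS = 1e12  # 1 EFLOPS expressed in MFLOPS

systems = {
    "K-Computer": 830.0,   # MFLOPS/W
    "BG/Q": 2026.0,        # MFLOPS/W
}

for name, mflops_per_watt in systems.items():
    watts = EXAFLOP_MFLOPS / mflops_per_watt
    print(f"{name}: {watts / 1e6:.0f} MW at exaflop scale")
# K-Computer: 1205 MW (~1.2 GW); BG/Q: 494 MW (~493 MW), matching the slide.
```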

  15. Agenda • Green Supercomputing: A Brief History • History • Evolution • Green500 Trends • Projections Toward Exascale • Setting Trends for Energy-Efficient Supercomputing • Methodologies • Metrics • Workloads • Conclusions

  16. Setting Trends for Energy-Efficient Supercomputing • Collaboration between EE HPC WG, Green Grid, TOP500, and Green500 • Research, evaluation, and convergence on • Methodologies: How do we measure power/energy? • Metrics: How do we combine benchmark scores and energy to determine efficiency? • Workload: What benchmark do we use?

  17. Methodologies • How to measure the power/energy of a supercomputer? • The simple answer: • Connect a power meter and read it. • The issue: • Where and how to connect the power meter and for how long? • We are collaborating with the TOP500, Green Grid, and the EE HPC WG to establish a standard method to rate measurement quality based on the answer to these questions.
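
To make the measurement question concrete, here is a minimal sketch of the naive approach: poll a meter at a fixed interval for the length of a run and integrate the samples into energy. read_power_watts() is a hypothetical stand-in for a real instrument interface, and where that meter attaches (node, rack, or facility) and how long it samples are precisely the questions a standard methodology must answer.

```python
import time

def read_power_watts():
    """Hypothetical meter read; a real harness would call the
    instrument's actual interface here."""
    raise NotImplementedError

def measure_energy(run_seconds, interval=1.0):
    """Sample instantaneous power every `interval` seconds for the
    duration of a run, then integrate (rectangle rule) into energy.
    Returns (energy_joules, average_power_watts). A coarser interval
    or a shorter window lowers measurement quality, which is what a
    standardized rating has to capture."""
    samples = []
    deadline = time.time() + run_seconds
    while time.time() < deadline:
        samples.append(read_power_watts())
        time.sleep(interval)
    energy_joules = sum(samples) * interval
    return energy_joules, energy_joules / (len(samples) * interval)
```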

  18. Methodologies

  19. Methodologies: LINPACK Phases

  20. Metrics: MFLOPS/Watt • MFLOPS/Watt: The metric du jour, but should it be? • Nominally includes only floating-point performance • LINPACK and FFT are two common benchmarks to measure the MFLOPS portion • Alternatives • A different baseline, e.g., updates/s or graph traversals/s • Combine multiple benchmarks, e.g., suites like HPCC • The Lingering Question • How do we rank computers with more than one score?
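
As a worked example of the metric (using the K computer's approximate November 2011 numbers, not figures from the slide): a system sustaining 10.51 PFLOPS of LINPACK at 12.66 MW scores (10.51 × 10^9 MFLOPS) / (12.66 × 10^6 W) ≈ 830 MFLOPS/Watt.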

  21. Metrics: The Green Index • Potential Solution • Combine results into a single metric • The Green Index (TGI) • A weighted sum of improvement in energy efficiency over a reference system for a set of benchmarks • Benefits • Benchmark suite agnostic, could shift benchmarks in suite over time • More complete picture of the energy efficiency of a system overall • Issues • A submission must consist of multiple benchmark runs • What should the weights be?
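
A minimal sketch of how such a weighted sum could be computed, assuming TGI takes the form Σ_i w_i · (EE_i / EE_ref,i), i.e., each benchmark's energy efficiency relative to a reference system, weighted and summed; the exact formulation, the reference system, and the weights are the open issues the slide names.

```python
def tgi(system_eff, reference_eff, weights):
    """The Green Index as a weighted sum of per-benchmark efficiency
    ratios against a reference system. This formulation is an assumption
    based on the slide's description, not an official definition.

    system_eff, reference_eff: benchmark name -> efficiency (e.g., MFLOPS/W)
    weights: benchmark name -> weight (choosing these is an open question)
    """
    return sum(
        weights[b] * system_eff[b] / reference_eff[b]
        for b in weights
    )

# Hypothetical three-benchmark suite with made-up numbers:
reference = {"hpl": 500.0, "fft": 120.0, "graph": 30.0}
test_sys  = {"hpl": 5000.0, "fft": 1500.0, "graph": 250.0}
weights   = {"hpl": 0.5, "fft": 0.3, "graph": 0.2}
print(round(tgi(test_sys, reference, weights), 2))  # 10.42
```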

  22. Metrics: TGI Example [Table: per-benchmark TGI contributions] • Final TGI for test system: 20.1

  23. Workloads • What benchmarks should we use? • LINPACK is the current, but contested, choice • Alternatives? • SPEC Power? • HPCC? • Neither is really appropriate • Green Benchmark Suite (GBench) • Suite of benchmarks for evaluating energy efficiency • Includes benchmarks evaluating the subsystems most critical for scientific computing • Starts with high-performance LINPACK (HPL) – the de facto benchmark in high-performance computing • Why do we always test systems at full load?

  24. Workloads: LV-LINPACK • Computers typically hit best efficiency at 100% load • How many real workloads hit 100%? • Load-Varying LINPACK (LV-LINPACK) • Part of GBench • Tests the system under different levels of workload • Showcases the energy proportionality of a system
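
A conceptual sketch of that idea, assuming LV-LINPACK sweeps a target load level and records efficiency at each step; run_linpack_at_load() is a hypothetical placeholder, and the real GBench harness may work differently.

```python
def run_linpack_at_load(load_fraction):
    """Hypothetical placeholder: run LINPACK throttled to a fraction of
    full load and return (mflops_achieved, average_power_watts)."""
    raise NotImplementedError

def efficiency_curve(levels=(0.25, 0.50, 0.75, 1.00)):
    """Sweep load levels and report MFLOPS/W at each one. The flatter
    this curve, the more energy proportional the system is."""
    curve = {}
    for load in levels:
        mflops, watts = run_linpack_at_load(load)
        curve[load] = mflops / watts
    return curve
```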

  25. Conclusions • We believe that, thus far, we have been successful in raising awareness of energy efficiency within the supercomputing community. • Energy efficiency is the new performance for large systems, but most supercomputers continue to lag. • Continuing efforts on new metrics, methodologies, and workloads are beginning to bear fruit. • Exascale computing in 100 or even 67 MW appears feasible if our current trend in energy efficiency continues. • 20 MW, on the other hand, will require a major outlier.

  26. Questions?
