
Overview of SPEC HPG Benchmarks SPEC BOF SC2003


Presentation Transcript


  1. Overview of SPEC HPG Benchmarks, SPEC BOF SC2003. Matthias Mueller, High Performance Computing Center Stuttgart, mueller@hlrs.de. Kumaran Kalyanasundaram, G. Gaertner, W. Jones, R. Eigenmann, R. Lieberman, M. van Waveren, and B. Whitney. SPEC High Performance Group

  2. Outline • Some general remarks about benchmarks • Benchmarks currently produced by SPEC HPG: • SPEC OMP • SPEC HPC2002

  3. Where is SPEC Relative to Other Benchmarks? There are many metrics; each one has its purpose: • Raw machine performance: Tflops • Microbenchmarks: Stream • Algorithmic benchmarks: Linpack • Compact apps/kernels: NAS benchmarks • Application suites: SPEC • User-specific applications: custom benchmarks

  4. Why do we need benchmarks? • Identify problems: measure machine properties • Time evolution: verify that we make progress • Coverage: help the vendors obtain representative codes: • Increase competition through transparency • Drive future development (see SPEC CPU2000) • Relevance: help the customers choose the right computer

  5. Comparison of different benchmark classes

  6. SPEC OMP • Benchmark suite developed by SPEC HPG • Benchmark suite for performance testing of shared-memory systems • Uses OpenMP versions of SPEC CPU2000 benchmarks • SPEC OMP mixes integer and FP codes in one suite • OMPM is focused on 4-way to 16-way systems • OMPL targets 32-way and larger systems

  7. SPEC HPC2002 Benchmark • Full application benchmarks (including I/O) targeted at HPC platforms • Currently three applications: • SPECenv: weather forecasting • SPECseis: seismic processing, used in the search for oil and gas • SPECchem: computational chemistry, used in the chemical and pharmaceutical industries (GAMESS) • Serial and parallel (OpenMP and/or MPI) • All codes include several data sizes

  8. Submitted Results

  9. Details of SPEC OMP

  10. SPEC OMP Applications

  Code      Application                           Language   Lines
  ammp      Molecular dynamics                    C          13500
  applu     CFD, partial LU                       Fortran     4000
  apsi      Air pollution                         Fortran     7500
  art       Image recognition / neural networks   C           1300
  fma3d     Crash simulation                      Fortran    60000
  gafort    Genetic algorithm                     Fortran     1500
  galgel    CFD, Galerkin FE                      Fortran    15300
  equake    Earthquake modeling                   C           1500
  mgrid     Multigrid solver                      Fortran      500
  swim      Shallow water modeling                Fortran      400
  wupwise   Quantum chromodynamics                Fortran     2200

  11. CPU2000 vs OMPL2001

  12. SPEC OMPL Results: Applications with scaling to 128

  13. SPEC OMPL Results: Superlinear scaling of applu

  14. SPEC OMPL Results: Applications with scaling to 64

  15. Details of SPEC HPC2002

  16. SPEC HPC2002 Benchmark • Full application benchmarks (including I/O) targeted at HPC platforms • Currently three applications: • SPECenv: weather forecasting • SPECseis: seismic processing, used in the search for oil and gas • SPECchem: computational chemistry, used in the chemical and pharmaceutical industries (GAMESS) • Serial and parallel (OpenMP and/or MPI) • All codes include several data sizes

  17. SPEC ENV2002 • Based on the WRF weather model, a state-of-the-art, non-hydrostatic mesoscale weather model; see http://www.wrf-model.org • The WRF (Weather Research and Forecasting) Modeling System development project is a multi-year effort undertaken by several agencies. • Members of the WRF Scientific Board include representatives from EPA, FAA, NASA, NCAR, NOAA, NRL, USAF, and several universities. • 25,000 lines of C and 145,000 lines of Fortran 90

  18. SPEC ENV2002 • Medium data set: SPECenvM2002 • 260x164x35 grid over the continental United States • 22 km resolution • Full physics • I/O associated with startup and the final result • Simulates weather for a 24-hour period starting Saturday, November 3rd, 2001 at 12:00 A.M. • SPECenvS2002 is provided for benchmark researchers interested in smaller problems • Test and Train data sets for porting and feedback • The benchmark runs use restart files created after the model has run for several simulated hours; this ensures that the cumulus and microphysics schemes are fully developed during the benchmark runs.

  19. SPEC HPC2002 Results: SPECenv scaling

  20. SPEC HPC2002 Results: SPECseis scaling

  21. SPEC HPC2002 Results: SPECchem scaling

  22. Hybrid Execution for SPECchem

  23. Current and Future Work of SPEC HPG • SPEC HPC: • Update of SPECchem • Improving portability, including tools • Larger datasets • New release of SPEC OMP: • Inclusion of alternative sources • Merge OMPM and OMPL on one CD

  24. Adoption of new benchmark codes • Remember that we need to drive future development! • Updates and new codes are important to stay relevant • Possible candidates: • Should represent a type of computation that is regularly performed on HPC systems • We are currently examining CPU2004 for candidates • Your applications are very welcome! Please contact SPEC HPG or me (mueller@hlrs.de) if you have a code for us.

  25. Conclusion and Summary • Results of OMPL and HPC2002: • Scalability of many programs to 128 CPUs • Larger data sets show better scalability • SPEC HPG will continue to update and improve the benchmark suites in order to stay representative of the work you do with your applications!

  26. BACKUP

  27. What is SPEC? The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops suites of benchmarks and also reviews and publishes submitted results from our member organizations and other benchmark licensees. For more details see http://www.spec.org

  28. SPEC Members • Members: 3DLabs * Advanced Micro Devices * Apple Computer, Inc. * ATI Research * Azul Systems, Inc. * BEA Systems * Borland * Bull S.A. * Dell * Electronic Data Systems * EMC * Encorus Technologies * Fujitsu Limited * Fujitsu Siemens * Fujitsu Technology Solutions * Hewlett-Packard * Hitachi Data Systems * IBM * Intel * ION Computer Systems * Johnson & Johnson * Microsoft * Mirapoint * Motorola * NEC - Japan * Network Appliance * Novell, Inc. * Nvidia * Openwave Systems * Oracle * Pramati Technologies * PROCOM Technology * SAP AG * SGI * Spinnaker Networks * Sun Microsystems * Sybase * Unisys * Veritas Software * Zeus Technology

  29. SPEC HPG = SPEC High-Performance Group • Founded in 1994 • Mission: to establish, maintain, and endorse a suite of benchmarks that are representative of real-world high-performance computing applications • SPEC HPG includes members from both industry and academia • Benchmark products: • SPEC OMP (OMPM2001, OMPL2001) • SPEC HPC2002, released at SC2002

  30. Currently active SPEC HPG Members • Fujitsu • HP • IBM • Intel • SGI • Sun • Unisys • Purdue University • University of Stuttgart

  31. SPEC Members • Members: 3DLabs * Advanced Micro Devices * Apple Computer, Inc. * ATI Research * Azul Systems, Inc. * BEA Systems * Borland * Bull S.A. * Dell * Electronic Data Systems * EMC * Encorus Technologies * Fujitsu Limited * Fujitsu Siemens * Fujitsu Technology Solutions * Hewlett-Packard * Hitachi Data Systems * IBM * Intel * ION Computer Systems * Johnson & Johnson * Microsoft * Mirapoint * Motorola * NEC - Japan * Network Appliance * Novell, Inc. * Nvidia * Openwave Systems * Oracle * Pramati Technologies * PROCOM Technology * SAP AG * SGI * Spinnaker Networks * Sun Microsystems * Sybase * Unisys * Veritas Software * Zeus Technology • Associates: Argonne National Laboratory * CSC - Scientific Computing Ltd. * Cornell University * CSIRO * Defense Logistics Agency * Drexel University * Duke University * Fachhochschule Gelsenkirchen, University of Applied Sciences * Harvard University * JAIST * Leibniz Rechenzentrum - Germany * Los Alamos National Laboratory * Massey University, Albany * NASA Glenn Research Center * National University of Singapore * North Carolina State University * PC Cluster Consortium * Purdue University * Queen's University * Seoul National University * Stanford University * Technical University of Darmstadt * Tsinghua University * University of Aizu - Japan * University of California - Berkeley * University of Edinburgh * University of Georgia * University of Kentucky * University of Illinois - NCSA * University of Maryland * University of Miami * University of Modena * University of Nebraska - Lincoln * University of New Mexico * University of Pavia * University of Pisa * University of South Carolina * University of Stuttgart * University of Tsukuba * Villanova University * Yale University

  32. CPU2000 vs. OMPM2001

  33. CPU2000 vs OMPL2001

  34. Program Memory Footprints

  35. SPEC ENV2002 – data generation • The WRF datasets used in SPEC ENV2002 are created using the WRF Standard Initialization (SI) software and standard sets of data used in numerical weather prediction. • The benchmark runs use restart files that are created after the model has run for several simulated hours. This ensures that cumulus and microphysics schemes are fully developed during the benchmark runs.

  36. SPECenv execution models on a Sun Fire 6800 • Medium scales better • OpenMP best for small size • MPI best for medium size

  37. SPECseis execution models on a Sun Fire 6800 • Medium scales better • OpenMP scales better than MPI

  38. SPECchem execution models on a Sun Fire 6800 • Medium shows better scalability • MPI is better than OpenMP

  39. SPEC OMP Results • 75 submitted results for OMPM • 28 submitted results for OMPL
