
GridBench: A Tool for Benchmarking Grids


Presentation Transcript


    1. GridBench: A Tool for Benchmarking Grids

    George Tsouloupas, Marios Dikaiakos
    High Performance Computing Lab, University of Cyprus
    {georget,mdd}@ucy.ac.cy
    http://grid.ucy.ac.cy

    2. Overview

    - Benchmarking: challenges and users
    - Related work
    - Our approach to performance evaluation
    - GridBench architecture and metadata
    - The tool interface
    - Results
    - Work in progress
    - Conclusion

    3. Challenges

    - Heterogeneous systems: hardware, software and configuration
    - Non-exclusive use of resources
    - Dynamic environment
    - Distinct administrative domains
    - Find resources, execute benchmarks, collect and interpret results
    In short: too many variables.

    4. Benchmark Users

    - End-users: need to know the capabilities of resources when running similar codes
    - Developers: develop and tune applications
    - Compare: job submission services, resource allocation policies, scheduling algorithms

    5. Benchmark Users

    - Architects/Administrators: improve system design; detect faults and misconfigurations (indicated by unexpected results); compare implementations/systems
    - Researchers: benchmarks can give insight into how Grids work and perform, and could provide a better understanding of the nature of Grids in general

    6. Related Work

    - Probes: “Benchmark Probes for Grid Assessment” [Chun et al. 2003]
    - Grid Benchmarking Research Group (Global Grid Forum): (CIGB etc.) Specification Version 1.0 [Wijngaart and Frumkin]
    - “Benchmarks for Grid Computing” [Snavely et al. 2003]
    - GridBench (CrossGRID - WP2): Prototype Documentation for GridBench version 1.0, 2003; Software Requirements Specification, version 1.1, for GridBench, 2002; “GridBench: A Tool for Benchmarking the Grid” [Tsouloupas and Dikaiakos 2003]

    7. Our requirements for a Grid benchmarking tool

    - Make it easy to conduct experiments
    - Allow the measurement of different aspects of the system's performance (micro-benchmarks, probes, application benchmarks)
    - Maintain a history of measurements
    - Accommodate retrieval and comparison of results
    - Collect monitoring information to help with result interpretation

    8. Grid Infrastructure Architecture

    [Diagram: several Sites, each with a Computing Element, a Storage Element, and multiple Worker Nodes, connected over a Wide Area Network to Central Services (VO, Resource Broker, etc.), together forming a Virtual Organization.]

    - Individual Resources (cluster nodes, Storage Elements)
    - Sites (clusters, SMPs)
    - Grid Constellation (multiple sites, VOs)

    9. A layered approach to benchmarking Grids: “Infrastructure viewpoint”

    Layers: Individual Resources, Sites, Constellations.
    Conjecture: a layered approach provides a more complete perspective on the system under study.

    10. A layered approach to benchmarking Grids: “Software viewpoint”

    - Micro-benchmarks: isolate basic performance characteristics
    - Micro-kernel benchmarks: synthetic codes
    - Application benchmarks: derived from real applications

    11. GridBench: A Tool for Benchmarking Grids.

    - Provides a simple scheme for specifying benchmark executions
    - Provides a set of benchmarks to characterize Grids at several levels
    - Provides mechanisms for executing benchmarks and collecting the results
    - Archives benchmark specifications and results for comparison and interpretation
    - Provides simple result management tools
    - Provides a user interface for the above

    12. Software Architecture: Perspective

    13. Software Architecture

    14. GridBench Meta-data

    GridBench Definition Language (GBDL, XML-based):
    - Definitions and results co-exist in the same structure (in the archive)
    - Intermediate form that allows for easy transformation to different job description formats

    [Schema diagram relating the GBDL elements: benchmark, component, metric, valueVector, monitor, parameter, location, resource, constraint, and corequisite/prerequisite, with their multiplicities.]

    15. GridBench Definition Language example
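    The original slide showed a GBDL example as a screenshot, which is not preserved in this transcript. Below is a minimal, hypothetical sketch assembled from the element names on the previous slide; the tag names follow that schema, but attribute names, values, and nesting are illustrative assumptions, not the actual GBDL syntax.

        <!-- Hypothetical GBDL sketch; element names taken from slide 14 -->
        <benchmark name="epstream">
          <component location="cluster.ui.sav.sk">        <!-- where this component runs -->
            <parameter name="nodes" value="4"/>           <!-- illustrative run parameter -->
            <metric name="memory-bandwidth" unit="MB/s">
              <valueVector/>                              <!-- filled in by the execution -->
            </metric>
            <monitor name="cpu-load"/>                    <!-- monitoring attached to the run -->
          </component>
        </benchmark>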

    16. Archival/Publication of Results

    Earlier versions: results published to the local MDS, for easy access by users and schedulers.
    Recent versions: benchmark results are archived in a native XML database (Apache Xindice).
    - Flexibility: allows for statistical analysis of results
    - Benchmark results are associated with their GBDL definition: results are meaningless without the specific parameters
    - Monitoring data: comprehension/analysis of results is enhanced when combined with monitoring data
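    To illustrate the claim that definitions and results co-exist in the archive, a hypothetical archived entry might extend the sketch from slide 15 as follows; all values and names are invented for illustration.

        <!-- Hypothetical archived entry: definition and results stored together -->
        <benchmark name="epstream">
          <component location="cluster.ui.sav.sk">
            <parameter name="nodes" value="4"/>
            <metric name="memory-bandwidth" unit="MB/s">
              <valueVector>412.0 398.5 405.2 401.7</valueVector>  <!-- one value per worker node -->
            </metric>
            <monitor name="cpu-load">
              <valueVector>0.02 0.01 0.11 0.03</valueVector>      <!-- monitoring data kept alongside -->
            </monitor>
          </component>
        </benchmark>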

    17. The Tool

    The Definition Interface: (1) pick a benchmark, (2) configure it.

    18. The Generated GBDL

    19. The Browsing Interface

    [Screenshot: a list of benchmark executions; querying metrics from selected executions.]

    20. Result management tools

    [Screenshot: metrics from selected executions; drag & drop can be used to compare similar metrics.]

    21. EPStream submitted to three Computing Elements

    [Screenshots: results from EPStream, including two submissions to cluster.ui.sav.sk. Different colors represent different Worker Nodes. EPStream measures memory bandwidth in MB/s.]

    22. MPPTest (blocking) submitted to three Computing Elements

    [Screenshots: results from MPPTest, including three submissions to apelatis.grid.ucy.ac.cy using 2 and 4 nodes.]

    23. Nine HPL executions on cluster.ui.sav.sk using various parameters and numbers of nodes

    [Screenshots: results from High-Performance Linpack.]

    24. Summary

    - Layered approach to Grid performance evaluation
    - GridBench Definition Language: definition of how and where benchmarks will run; automatic generation of job descriptions
    - Utility components: ease execution and collection of results; result management
    - GUI tool for running/browsing: easy execution on Grid resources
    - Initial set of results

    25. Conclusion

    - The mechanism/meta-data for defining and executing the benchmarks makes it very easy to take measurements.
    - XML storage of definitions and results proved rather complicated to query, but quite flexible.
    - The tool prototype is in place, being tested, has provided some initial results, and is ready for the next revision.
    - Porting benchmarks to the Grid is not as straightforward as anticipated (heterogeneity of resources, configuration, libraries).
    - Benchmarks are a great tool for detecting flaws in hardware, software and configuration.

    26. Work-In-Progress

    - Complete the GBDL specification, possibly building upon the work of the GGF Job Submission Description Language working group
    - Implement more benchmarks, focusing on application-based benchmarks (CrossGrid and other applications)
    - Integration with monitoring tools
    - Result interpretation and tools to assist interpretation

    Future Work

    27. Acknowledgments

    [Slide showed sponsor logos under the headings “Funded by”, “Part of”, and “In cooperation with”; the logos are not preserved in this transcript.]
    Many thanks to Dr. Pedro Trancoso, University of Cyprus.

    28. Questions, Comments, Suggestions.

    http://grid.ucy.ac.cy Thank you.

    29. Additional Slides

    30. Translation to JDL/RSL

    - XML-based GBDL is translated to a job description.
    - Support for simple jobs can be provided through simple templates (executable, parameters and locations are transformed to simple RSL/JDL).
    - Most benchmarks need special command-line parameter formatting, or parameter files.

    [Diagram: GBDL feeds a Translator with a Parameter Handler, which produces JDL, RSL, ...]

    31. Translation Example: GBDL to RSL

    RSL:
    +(&(resourceManagerContact="ce1.grid.ucy")
       (label="subjob 0")
       (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0))
       (count=2)
       (arguments="-n 1000")
       (executable="/bin/myexec"))
     (&(resourceManagerContact="ce2.grid.ucy")
       (label="subjob 1")
       (environment=(GLOBUS_DUROC_SUBJOB_INDEX 1))
       (count=2)
       (arguments="-n 1000")
       (executable="/bin/myexec"))
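    The GBDL input for this example was shown as a screenshot and is not preserved here. A hypothetical input that a template-based translator could expand into the RSL above might look like the following; element names follow slide 14, while the attribute names and the benchmark name are assumptions, and the values are taken from the RSL itself.

        <!-- Hypothetical GBDL input corresponding to the RSL above -->
        <benchmark name="myexec-example">
          <component location="ce1.grid.ucy">             <!-- becomes subjob 0 -->
            <parameter name="count" value="2"/>
            <parameter name="arguments" value="-n 1000"/>
            <parameter name="executable" value="/bin/myexec"/>
          </component>
          <component location="ce2.grid.ucy">             <!-- becomes subjob 1 -->
            <parameter name="count" value="2"/>
            <parameter name="arguments" value="-n 1000"/>
            <parameter name="executable" value="/bin/myexec"/>
          </component>
        </benchmark>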

    32. The Benchmark Suite

    Micro-benchmarks at the Worker-node level:
    - EPWhetstone: embarrassingly parallel adaptation of the serial Whetstone benchmark
    - EPStream: adaptation of the Stream benchmark
    - BlasBench: evaluates the serial performance of the BLAS routines
    Micro-benchmarks at the Site level:
    - Bonnie++: storage I/O performance
    - MPPTest: MPI performance measurements
    Micro-benchmarks at the VO level:
    - MPPTest: MPI performance measurements (spanning sites)
    - gb_ftb: file transfer benchmark
    Micro-kernels at the Site level:
    - High-Performance Linpack
    - Selected kernels from the NAS Parallel Benchmarks
    Micro-kernels at the VO level:
    - Computationally Intensive Grid Benchmarks

    33. The Benchmark Suite (cont'd)

    Application-kernel benchmarks at the Site level:
    - CrossGrid application kernels
    Application-kernel benchmarks at the VO level:
    - CrossGrid applications

    34. EPWhetstone submitted to two Computing Elements

    [Screenshots: results from EPWhetstone, including three submissions to apelatis.grid.ucy.ac.cy. Different colors represent different Worker Nodes. EPWhetstone measures “Whetstone MIPS”.]
