Performance Evaluation and Analysis Consortium (PEAC) End Station


Presentation Transcript


  1. Performance Evaluation and Analysis Consortium (PEAC) End Station Patrick H. Worley, Computational Earth Sciences Group, Computer Science and Mathematics Division

  2. Overview The PEAC End Station provides the performance evaluation and performance tool developer communities access to the Leadership Computing Facility (LCF) systems.

  3. Overview (continued) All of this must be accomplished while adhering to the “Golden Rules” of the performance community:
  • Low visibility (no production runs!)
  • Open and fair evaluations
  • Timely reporting of results

  4. Status as of 8/28/07
  32 active users, 39 active projects:
  • 13 application performance analysis and optimization
  • 8 system evaluation
  • 8 tool development
  • 6 infrastructure development
  • 4 application modeling
  Consuming:
  • XT4: 1,168,000 processor hours (exceeding the 1,000,000 processor-hour allocation)
  Contributing to:
  • 1 refereed journal paper
  • 1 invited journal paper
  • 6 refereed proceedings papers
  • 10 proceedings papers
  • 2 book chapters
  • Numerous oral presentations

  5. System evaluation

  6. Performance tools

  7. Application performance analysis and optimization

  8. Application code scaling, optimization, and/or performance evaluation

  9. System infrastructure

  10. Performance modeling

  11. Subsystem evaluations
  • Dual- vs. single-core performance evaluation using APEX-MAP (LBL)
  • I/O performance characterization (LBL)
  • Identifying performance anomalies (ANL)
  • MPI performance characterization (ORNL); see the timing sketch below
  [Figure: ratio of time for all processes sending in a halo update to the time for a single sender]
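The halo-update ratio in the figure above can be measured with a small MPI microbenchmark. The following is a minimal sketch, not code from the PEAC evaluations: it times a ring-style halo send once with every rank sending and once with only a single sender, then reports the ratio. The message size (HALO_BYTES), repetition count (REPS), and ring neighbor layout are illustrative assumptions.

/* Hedged sketch: time a halo send with all ranks sending vs. a single sender. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define HALO_BYTES (64 * 1024)   /* illustrative message size */
#define REPS 100                 /* illustrative repetition count */

/* Each sending rank sends one halo message to its right neighbor; a rank
 * posts a receive only if its left neighbor is sending. */
static double time_halo(int i_send, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    /* Tell the right neighbor whether to expect a message from this rank. */
    int left_sends = 0;
    MPI_Sendrecv(&i_send, 1, MPI_INT, right, 0,
                 &left_sends, 1, MPI_INT, left, 0, comm, MPI_STATUS_IGNORE);

    char *sendbuf = calloc(HALO_BYTES, 1);
    char *recvbuf = calloc(HALO_BYTES, 1);

    MPI_Barrier(comm);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        MPI_Request req[2];
        int n = 0;
        if (left_sends)
            MPI_Irecv(recvbuf, HALO_BYTES, MPI_BYTE, left, 1, comm, &req[n++]);
        if (i_send)
            MPI_Isend(sendbuf, HALO_BYTES, MPI_BYTE, right, 1, comm, &req[n++]);
        MPI_Waitall(n, req, MPI_STATUSES_IGNORE);
        MPI_Barrier(comm);               /* keep idle ranks in step */
    }
    double dt = (MPI_Wtime() - t0) / REPS;

    free(sendbuf);
    free(recvbuf);
    return dt;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t_all = time_halo(1, MPI_COMM_WORLD);          /* every rank sends */
    double t_one = time_halo(rank == 0, MPI_COMM_WORLD);  /* only rank 0 sends */

    if (rank == 0)
        printf("all-senders / single-sender time ratio: %.2f\n", t_all / t_one);

    MPI_Finalize();
    return 0;
}

Built with mpicc and run at increasing process counts, a ratio well above 1 indicates how much contention the network adds when every process communicates at once, which is what the plotted metric captures.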

  12. Application analyses and benchmarks
  • Performance sensitivities (SDSC)
  • Scalability optimizations (ORNL)
  • Processing of genomes into domain maps: needs improved load balancing that takes the scale-free nature of the graphs into account (see the sketch below)
  • Porting and optimizing new applications (RENCI/NCSA)
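The load-balancing point can be illustrated briefly. In a scale-free graph a few hub vertices carry most of the work, so giving each process an equal number of vertices balances poorly; one common remedy is to weight each vertex by its degree and assign vertices greedily, heaviest first. The code below is a hypothetical illustration of that idea only, not the project's partitioner; the degree list and partition count are made-up example data.

/* Hedged sketch: degree-weighted greedy (longest-processing-time) assignment. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;   /* sort degrees descending */
}

int main(void)
{
    int degree[] = { 500, 3, 2, 120, 4, 2, 1, 80, 2, 1, 1, 1 };  /* heavy-tailed */
    int nvert = sizeof degree / sizeof degree[0];
    int npart = 3;
    long load[3] = { 0, 0, 0 };

    qsort(degree, nvert, sizeof degree[0], cmp_desc);

    /* Give each vertex, heaviest first, to the currently least-loaded partition. */
    for (int v = 0; v < nvert; v++) {
        int best = 0;
        for (int p = 1; p < npart; p++)
            if (load[p] < load[best]) best = p;
        load[best] += degree[v];
    }

    for (int p = 0; p < npart; p++)
        printf("partition %d load = %ld\n", p, load[p]);
    return 0;
}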

  13. Tool development
  • mpiP callsite profiling (LLNL/ORNL); a sketch of the callsite-interception idea follows below
  • SvPablo source-code-correlated performance analysis (RENCI)
  • SCALASCA trace-based performance analysis (FZ-Jülich, UTenn)
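As background on how mpiP-style callsite profiling works, the following is a minimal sketch of the standard PMPI interposition technique; it is not mpiP source code, and the use of __builtin_return_address is an illustrative, GCC/Clang-specific way to identify the call site.

/* Hedged sketch: intercept MPI_Send via PMPI and record per-callsite timing. */
#include <mpi.h>
#include <stdio.h>

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    /* __builtin_return_address(0) identifies the calling location. */
    void  *callsite = __builtin_return_address(0);
    double t0 = MPI_Wtime();
    int    rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
    double dt = MPI_Wtime() - t0;

    /* A real tool aggregates per-callsite statistics and writes a report at
     * MPI_Finalize; printing each event is enough for a sketch. */
    fprintf(stderr, "MPI_Send at callsite %p: %.6f s\n", callsite, dt);
    return rc;
}

Such a wrapper is compiled into a library and linked (or preloaded) ahead of the MPI library, so no application source changes are needed; mpiP itself additionally records a configurable call-stack depth per callsite and aggregates statistics across ranks into a single report.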

  14. Co-principal investigators
  • Oak Ridge National Laboratory: Patrick Worley (PI), Jeffrey Vetter
  • Argonne National Laboratory: William Gropp
  • Lawrence Berkeley National Laboratory: David Bailey, Leonid Oliker
  • Lawrence Livermore National Laboratory: Bronis de Supinski
  • Rice University: John Mellor-Crummey
  • University of California–Berkeley: Kathy Yelick
  • University of Maryland: Jeffrey Hollingsworth
  • University of Tennessee: Jack Dongarra
  • University of Wisconsin: Barton Miller
  • University of California–San Diego: Allan Snavely
  • University of North Carolina: Daniel Reed
  • University of Oregon: Allen Malony

  15. Contact
  Patrick H. Worley, Computational Earth Sciences Group, Computer Science and Mathematics Division, (865) 574-3128, worleyph@ornl.gov
  Barbara Helland, DOE Program Manager, Office of Advanced Scientific Computing Research, DOE Office of Science
