
On-line Automated Performance Diagnosis on Thousands of Processors

Philip C. Roth, Future Technologies Group, Computer Science and Mathematics Division, Oak Ridge National Laboratory; Paradyn Research Group, Computer Sciences Department, University of Wisconsin-Madison.


Presentation Transcript


  1. On-line Automated Performance Diagnosis on Thousands of Processors Philip C. Roth Future Technologies Group Computer Science and Mathematics Division Oak Ridge National Laboratory Paradyn Research Group Computer Sciences Department University of Wisconsin-Madison

  2. High Performance Computing Today • Large parallel computing resources • Tightly coupled systems (Earth Simulator, BlueGene/L, XT3) • Clusters (LANL Lightning, LLNL Thunder) • Grid • Large, complex applications • ASCI Blue Mountain job sizes (2001) • 512 cpus: 17.8% • 1024 cpus: 34.9% • 2048 cpus: 19.9% • Small fraction of peak performance is the rule

  3. Achieving Good Performance • Need to know what and where to tune • Diagnosis and tuning tools are critical for realizing potential of large-scale systems • On-line automated tools are especially desirable • Manual tuning is difficult • Finding interesting data in large data volume • Understanding application, OS, hardware interactions • Automated tools require minimal user involvement; expertise is built into the tool • On-line automated tools can adapt dynamically • Dynamic control over data volume • Useful results from a single run • But: tools that work well in small-scale environments often don’t scale

  4. Barriers to Large-Scale Performance Diagnosis • Managing performance data volume • Communicating efficiently between distributed tool components • Making scalable presentation of data and analysis results [Diagram: tool front end connected directly to tool daemons d0 … dP-1, each attached to an application process a0 … aP-1]

  5. Our Approach for Addressing These Scalability Barriers • MRNet: multicast/reduction infrastructure for scalable tools • Distributed Performance Consultant: strategy for efficiently finding performance bottlenecks in large-scale applications • Sub-Graph Folding Algorithm: algorithm for effectively presenting bottleneck diagnosis results for large-scale applications

  6. Outline • Performance Consultant • MRNet • Distributed Performance Consultant • Sub-Graph Folding Algorithm • Evaluation • Summary

  7. Performance Consultant • Automated performance diagnosis • Search for application performance problems • Start with global, general experiments (e.g., test CPUbound across all processes) • Collect performance data using dynamic instrumentation • Collect only the data desired • Remove the instrumentation when no longer needed • Make decisions about the truth of each experiment • Refine the search: create more specific experiments based on “true” experiments (those whose data is above a user-configurable threshold)
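The refinement loop described on this slide can be sketched in Python. Everything here is an illustrative assumption, not Paradyn's actual interface: the `measure` and `refine` callables and the threshold value stand in for dynamic instrumentation and the experiment hierarchy.

```python
# Illustrative sketch of the Performance Consultant's search loop:
# start with general experiments, keep those whose measured value is
# above a user-configurable threshold ("true" experiments), and refine
# those into more specific experiments. In the real tool, measure()
# would insert dynamic instrumentation and remove it when done.

THRESHOLD = 0.2  # hypothetical threshold, e.g. fraction of time on-CPU

def diagnose(initial_experiments, measure, refine):
    """Breadth-first refinement of experiments that test 'true'."""
    frontier = list(initial_experiments)
    findings = []
    while frontier:
        exp = frontier.pop(0)
        if measure(exp) > THRESHOLD:          # experiment is "true"
            findings.append(exp)
            frontier.extend(refine(exp))      # more specific experiments
    return findings

# Toy data: CPUbound holds overall, and only Do_mult holds below main.
data = {"CPUbound": 0.5, "CPUbound/main": 0.4,
        "CPUbound/main/Do_mult": 0.3, "CPUbound/main/Do_row": 0.1}
children = {"CPUbound": ["CPUbound/main"],
            "CPUbound/main": ["CPUbound/main/Do_mult",
                              "CPUbound/main/Do_row"]}
result = diagnose(["CPUbound"],
                  lambda e: data.get(e, 0.0),
                  lambda e: children.get(e, []))
```

Only experiments that test true are refined further, which is how the search prunes the data volume: Do_row falls below the threshold, so no more specific experiments are created under it.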

  8. Performance Consultant [Diagram: application processes myapp{367}, myapp{4287}, and myapp{27549} running on hosts c001.cs.wisc.edu through c128.cs.wisc.edu]

  9. Performance Consultant [Diagram: search history graph refining CPUbound through main into Do_row, Do_col, and Do_mult, both for the application as a whole and for each process myapp{367}, myapp{4287}, myapp{27549} on hosts c001.cs.wisc.edu through c128.cs.wisc.edu]

  10. Performance Consultant [Diagram: the same search history graph, with the tool front end running on cham.cs.wisc.edu]

  11. Outline • Performance Consultant • MRNet • Distributed Performance Consultant • Sub-Graph Folding Algorithm • Evaluation • Summary

  12. MRNet: Multicast/Reduction Overlay Network • Parallel tool infrastructure providing: • Scalable multicast • Scalable data synchronization and transformation • Network of processes between tool front-end and back-ends • Useful for parallelizing and distributing tool activities • Reduce latency • Reduce computation and communication load at tool front-end • Joint work with Dorian Arnold (University of Wisconsin-Madison)
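The reduction side of MRNet can be sketched roughly as follows. This is not MRNet's real API (MRNet is a C++ library with plug-in filters); it is a minimal model of the idea that internal processes combine values level by level, so the front end receives one aggregated result instead of P per-daemon values.

```python
# Sketch of tree-based reduction: daemons produce leaf values, each
# internal process combines up to 'fanout' children with a filter
# function, and a single root value reaches the tool front end.

def reduce_tree(leaf_values, fanout, combine):
    """Combine leaf values level by level, 'fanout' children per node."""
    level = list(leaf_values)
    while len(level) > 1:
        level = [combine(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    return level[0]

# e.g., total CPU time over four daemons, binary tree, sum filter
total = reduce_tree([1.0, 2.0, 3.0, 4.0], fanout=2, combine=sum)
```

With a fanout of k, the front end handles k messages per round instead of P, which is what reduces the computation and communication load at the root.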

  13. Typical Parallel Tool Organization [Diagram: tool front end connected directly to tool daemons d0 … dP-1, each attached to an application process a0 … aP-1]

  14. MRNet-based Parallel Tool Organization [Diagram: a multicast/reduction network of internal processes with filters interposed between the tool front end and the tool daemons d0 … dP-1, which attach to application processes a0 … aP-1]

  15. Outline • Performance Consultant • MRNet • Distributed Performance Consultant • Sub-Graph Folding Algorithm • Evaluation • Summary

  16. Performance Consultant: Scalability Barriers • MRNet can alleviate scalability problem for global performance data (e.g., CPU utilization across all processes) • But front-end still processes local performance data (e.g., utilization of process 5247 on host mcr398.llnl.gov)

  17. Performance Consultant [Diagram: the centralized search, with the front end on cham.cs.wisc.edu processing every host-specific refinement of CPUbound, main, Do_row, Do_col, and Do_mult for myapp{367}, myapp{4287}, myapp{27549} on c001.cs.wisc.edu through c128.cs.wisc.edu]

  18. Distributed Performance Consultant [Diagram: the same search, with the host-specific refinements performed by the daemons on c001.cs.wisc.edu through c128.cs.wisc.edu rather than by the front end on cham.cs.wisc.edu]

  19. Distributed Performance Consultant: Variants • Natural steps from traditional centralized approach (CA) • Partially Distributed Approach (PDA) • Distributed local searches, centralized global search • Requires complex instrumentation management • Truly Distributed Approach (TDA) • Distributed local searches only • Insight into global behavior from combining local search results (e.g., using Sub-Graph Folding Algorithm) • Simpler tool design than PDA

  20. Distributed Performance Consultant: PDA [Diagram: the global CPUbound search remains at the front end on cham.cs.wisc.edu; the local searches over main, Do_row, Do_col, and Do_mult are distributed to the daemons for myapp{367}, myapp{4287}, myapp{27549}]

  21. Distributed Performance Consultant: TDA [Diagram: local searches only; each daemon on c001.cs.wisc.edu through c128.cs.wisc.edu refines main into Do_row, Do_col, and Do_mult for its own process]

  22. Distributed Performance Consultant: TDA [Diagram: the local search results combined by the Sub-Graph Folding Algorithm as they flow toward the front end on cham.cs.wisc.edu]

  23. Outline • Paradyn and the Performance Consultant • MRNet • Distributed Performance Consultant • Sub-Graph Folding Algorithm • Evaluation • Summary

  24. Search History Graph Example [Diagram: CPUbound refined through main into nodes A through E, separately for each of the processes myapp{7624}, myapp{7625}, myapp{1272}, myapp{1273} on c33.cs.wisc.edu and c34.cs.wisc.edu]

  25. Search History Graphs • Search History Graph is effective for presenting search-based performance diagnosis results… • …but it does not scale to a large number of processes because it shows one sub-graph per process

  26. Sub-Graph Folding Algorithm • Combines host-specific sub-graphs into composite sub-graphs • Each composite sub-graph represents a behavioral category among application processes • Dynamic clustering of processes by qualitative behavior
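A minimal sketch of the folding idea, under the simplifying assumption that each process's sub-graph can be summarized by the set of "true" node paths it contains (the real algorithm folds the graphs structurally; the data structures and names here are illustrative only):

```python
# Hypothetical sketch: group processes whose search sub-graphs are
# qualitatively identical into one composite category, so the display
# shows one sub-graph per behavioral class instead of one per process.

from collections import defaultdict

def fold(subgraphs):
    """subgraphs: process name -> frozenset of 'true' node paths.
    Returns a mapping from sub-graph shape to the processes sharing it."""
    categories = defaultdict(list)
    for process, shape in sorted(subgraphs.items()):
        categories[shape].append(process)
    return dict(categories)

# Three processes, two behavioral categories.
cats = fold({
    "myapp{1272}": frozenset({"main/A", "main/C"}),
    "myapp{1273}": frozenset({"main/A", "main/C"}),
    "myapp{7625}": frozenset({"main/A", "main/D"}),
})
```

The number of composite sub-graphs grows with the number of distinct behaviors, not with the number of processes, which is what makes the presentation scale.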

  27. SGFA: Example [Diagram: the per-process sub-graphs of slide 24 folded into a composite sub-graph for c*.cs.wisc.edu and myapp{*}, with differing nodes such as D and E distinguishing the behavioral categories]

  28. SGFA: Implementation • Custom MRNet filter • Filter in each MRNet process keeps folded graph of search results from all reachable daemons • Updates periodically sent upstream • By induction, filter in front-end holds entire folded graph • Optimization for unchanged graphs
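The incremental update scheme on this slide might look like the following sketch. Real MRNet filters are C++ plug-ins; the class and method names here are hypothetical stand-ins for the filter state kept in each MRNet process.

```python
# Sketch of an SGFA-style filter: merge the folded graphs arriving
# from child processes, and forward a result upstream only when the
# merged graph has changed since the last send (the "unchanged
# graphs" optimization mentioned on the slide).

class FoldingFilter:
    def __init__(self):
        self.merged = {}        # category shape -> set of process names
        self.last_sent = None

    def update(self, child_graph):
        """Merge one child's folded graph into the running result."""
        for shape, procs in child_graph.items():
            self.merged.setdefault(shape, set()).update(procs)

    def emit(self):
        """Return the merged graph, or None if unchanged since last emit."""
        snapshot = {shape: frozenset(procs)
                    for shape, procs in self.merged.items()}
        if snapshot == self.last_sent:
            return None         # nothing new to send upstream
        self.last_sent = snapshot
        return snapshot

f = FoldingFilter()
f.update({"cat1": {"myapp{367}"}})
f.update({"cat1": {"myapp{4287}"}})
first = f.emit()     # merged graph covering both processes
second = f.emit()    # no change since last send
```

Because each filter holds the folded graph for all daemons reachable below it, the front end's filter holds the entire folded graph by induction, as the slide states.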

  29. Outline • Performance Consultant • MRNet • Distributed Performance Consultant • Sub-Graph Folding Algorithm • Evaluation • Summary

  30. DPC + SGFA: Evaluation • Modified Paradyn to perform bottleneck searches using the CA, PDA, or TDA approach • Modified instrumentation cost tracking to support PDA • Track global and per-process instrumentation cost separately • Simple fixed-partition policy for scheduling global and local instrumentation • Implemented the Sub-Graph Folding Algorithm as a custom MRNet filter to support TDA (used by all three approaches) • Instrumented front-end, daemons, and MRNet internal processes to collect CPU and I/O load information

  31. DPC + SGFA: Evaluation • su3_rmd • QCD pure lattice gauge theory code • C, MPI • Weak scaling scalability study • LLNL MCR cluster • 1152 nodes (1048 compute nodes) • Two 2.4 GHz Intel Xeons per node • 4 GB memory per node • Quadrics Elan3 interconnect (fat tree) • Lustre parallel file system

  32. DPC + SGFA: Evaluation • PDA and TDA: bottleneck searches with up to 1024 processes so far, limited by partition size • CA: scalability limit at less than 64 processes • Similar qualitative results from all approaches

  33.–41. DPC: Evaluation [Nine slides of evaluation charts; the graphical content is not preserved in this transcript]

  42. SGFA: Evaluation [Evaluation chart; the graphical content is not preserved in this transcript]

  43. Summary • Tool scalability is critical for effective use of large-scale computing resources • On-line automated performance tools are especially important at large scale • Our approach: • MRNet • Distributed Performance Consultant (TDA) plus Sub-Graph Folding Algorithm

  44. References • P.C. Roth, D.C. Arnold, and B.P. Miller, “MRNet: a Software-Based Multicast/Reduction Network for Scalable Tools,” SC 2003, Phoenix, Arizona, November 2003 • P.C. Roth and B.P. Miller, “The Distributed Performance Consultant and the Sub-Graph Folding Algorithm: On-line Automated Performance Diagnosis on Thousands of Processes,” in submission • Publications available from http://www.paradyn.org • MRNet software available from http://www.paradyn.org/mrnet
