
Supercomputer Performance Characterization




  1. Supercomputer Performance Characterization Wayne Pfeiffer July 17, 2006

  2. Here are some important computer performance questions
  • What key computer system parameters determine performance?
  • What synthetic benchmarks can be used to characterize these system parameters?
  • How does performance on synthetics compare between computers?
  • How does performance on applications compare between computers?
  • How does performance scale (i.e., vary with processor count)?

  3. Comparative performance results have been obtained on six computers at NCSA & SDSC, all with > 1,000 processors

  4. These computers have shared-memory nodes of widely varying size connected by different switch types
  • Blue Gene
    • Massively parallel processor system with low-power, 2p nodes
    • Two custom switches for point-to-point and collective communication
  • Cobalt
    • Cluster of two large, 512p nodes (also called a constellation)
    • Custom switch within nodes & commodity switch between nodes
  • DataStar
    • Cluster of 8p nodes
    • Custom high-performance switch called Federation
  • Mercury, Tungsten, & T2
    • Clusters of 2p nodes
    • Commodity switches

  5. Performance can be better understood with a simple model
  • Total run time can be split into three components: ttot = tcomp + tcomm + tio
  • Overlap may exist. If so, it can be handled as follows:
    • tcomp = computation time
    • tcomm = communication time that can't be overlapped with tcomp
    • tio = I/O time that can't be overlapped with tcomp & tcomm
  • Relative values vary depending upon computer, application, problem, & number of processors
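The run-time model on this slide can be sketched in a few lines of Python. This is an illustrative sketch, not code from the talk; the function and parameter names (`total_run_time`, `comm_overlap`, `io_overlap`) are my own, and the overlap terms stand in for whatever fraction of communication or I/O a given computer can hide behind computation.

```python
def total_run_time(t_comp, t_comm, t_io, comm_overlap=0.0, io_overlap=0.0):
    """Return ttot = tcomp + tcomm' + tio', where the primed terms are
    the portions of communication and I/O time that cannot be
    overlapped with computation (per the slide's model)."""
    t_comm_exposed = max(t_comm - comm_overlap, 0.0)  # non-overlapped comm time
    t_io_exposed = max(t_io - io_overlap, 0.0)        # non-overlapped I/O time
    return t_comp + t_comm_exposed + t_io_exposed

# With no overlap the three components simply add;
# hiding 3 s of communication removes 3 s from the total.
print(total_run_time(10.0, 4.0, 1.0))                    # 15.0
print(total_run_time(10.0, 4.0, 1.0, comm_overlap=3.0))  # 12.0
```

The `max(..., 0.0)` guard reflects that overlapping more than the full communication or I/O time cannot make those components negative.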

  6. Run-time components depend upon system parameters & code features
  Differences between point-to-point & collective communication are important too

  7. Compute, communication, & I/O speeds have been measured for many synthetic & application benchmarks
  • Synthetic benchmarks
    • sloops (includes daxpy & dot)
    • HPL (Linpack)
    • HPC Challenge
    • NAS Parallel Benchmarks
    • IOR
  • Application benchmarks
    • Amber 9 PMEMD (biophysics: molecular dynamics)
    • …
    • WRF (atmospheric science: weather prediction)

  8. Normalized memory access profiles for daxpy show better memory access, but more memory contention, on Blue Gene compared to DataStar
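For readers unfamiliar with the kernel being profiled: daxpy computes y ← a·x + y over double-precision vectors, and its performance is dominated by memory bandwidth rather than arithmetic. Below is a minimal pure-Python sketch of the operation for illustration only; the benchmarks in the talk use compiled, tuned implementations (e.g., from a BLAS library).

```python
def daxpy(a, x, y):
    """Update y in place: y[i] = a * x[i] + y[i] (BLAS Level 1 daxpy).
    Each element needs two loads and one store but only two flops,
    which is why the kernel stresses memory access, not the FPU."""
    for i in range(len(x)):
        y[i] = a * x[i] + y[i]
    return y

result = daxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0])
# result is [12.0, 14.0, 16.0]
```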

  9. Each HPCC synthetic benchmark measures one or two system parameters in varying combinations

  10. Relative speeds are shown for HPCC benchmarks on 6 computers at 1,024p; 4 different computers are fastest depending upon benchmark; 2 of these are also slowest, depending upon benchmark. Data available soon at CIP Web site: www.ci-partnership.org

  11. Absolute speeds are shown for HPCC & IOR benchmarks on SDSC computers; TG processors are fastest, BG & DS interconnects are fastest, & all three computers have similar I/O rates

  12. Relative speeds are shown for 5 applications on 6 computers at various processor counts; Cobalt & DataStar are generally fastest

  13. Good scaling is essential to take advantage of high processor counts
  • Two types of scaling are of interest
    • Strong: performance vs processor count (p) for fixed problem size
    • Weak: performance vs p for fixed work per processor
  • There are several ways of plotting scaling
    • Run time (t) vs p
    • Speed (1/t) vs p
    • Speed/p vs p
  • Scaling depends significantly on the computer, application, & problem
  • Use a log-log plot to preserve ratios when comparing computers
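The strong-scaling views listed above can be made concrete with two small helpers, sketched here with hypothetical timings (not measured data from the talk): speedup is the ratio of run times relative to a baseline processor count, and parallel efficiency divides that speedup by the increase in p, so ideal strong scaling gives an efficiency of 1.0.

```python
def speedup(t_base, t_p):
    """Speed ratio (1/t view) relative to the baseline run."""
    return t_base / t_p

def efficiency(t_base, p_base, t_p, p):
    """Parallel efficiency (speed/p view): speedup per added processor."""
    return (t_base / t_p) / (p / p_base)

# Hypothetical strong-scaling scan: time halves from 64p to 128p (ideal),
# but drops only 25% from 128p to 256p (scaling tails off).
times = {64: 100.0, 128: 50.0, 256: 37.5}
print(efficiency(times[64], 64, times[128], 128))  # 1.0 (ideal)
print(efficiency(times[64], 64, times[256], 256))  # below 1.0
```

Plotting speed (1/t) against p on log-log axes, as the slide recommends, turns constant-efficiency scaling into a straight line, so ratios between computers are preserved at every p.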

  14. AWM 512^3 problem shows good strong scaling to 2,048p on Blue Gene & to 512p on DataStar, but not on TeraGrid cluster. Data from Yifeng Cui

  15. MILC medium problem shows superlinear speedup on Cobalt, Mercury, & DataStar at small processor counts; strong scaling ends for DataStar & Blue Gene above 2,048p

  16. NAMD ApoA1 problem scales best on DataStar & Blue Gene; Cobalt is fastest below 512p, but the same speed as DataStar at 512p

  17. WRF standard problem scales best on DataStar; Cobalt is fastest below 512p, but the same speed as DataStar at 512p

  18. Communication fraction generally grows with processor count in strong scaling scans, such as for WRF standard problem on DataStar

  19. A more careful look at Blue Gene shows many pluses
  + Hardware is more reliable than for other high-end systems installed at SDSC in recent years
  + Compute times are extremely reproducible
  + Networks scale well
  + I/O performance with GPFS is good at high p
  + Price per peak flop/s is low
  + Power per flop/s is low
  + Footprint is small

  20. But there are also some minuses
  - Processors are relatively slow
    • Clock speed is 700 MHz
    • Compilers seldom use second FPU in each processor (though optimized libraries do)
  - Applications must scale well to get high absolute performance
  - Memory is only 512 MB/node, so some problems don't fit
    • Coprocessor mode can be used (with 1p/node), but this is inefficient
    • Some problems still don't fit even in coprocessor mode
  - Cross-compiling complicates software development for complex codes

  21. Major applications ported and being run on BG at SDSC span various disciplines

  22. Speed of BG relative to DataStar varies around the clock speed ratio (0.47 = 0.7/1.5) for applications on ≥ 512p; CO & VN mode perform similarly (per MPI p)

  23. DNS scaling on BG is generally better than on DataStar, but shows unusual variation; VN mode is somewhat slower than CO mode (per MPI p). Data from Dmitry Pekurovsky

  24. If the number of allocated processors is considered, then VN mode is faster than CO mode, and both modes show unusual variation. Data from Dmitry Pekurovsky

  25. IOR weak scaling scans using GPFS-WAN show BG in VN mode achieves 3.4 GB/s for writes (~DS) & 2.7 GB/s for reads (>DS)

  26. Blue Gene has more limited applicability than DataStar, but is a good choice if the application is right
  + Some applications run relatively fast & scale well
  + Turnaround is good with only a few users
  + Hardware is reliable & easy to maintain
  - Other applications run relatively slowly and/or don't scale well
  - Some typical problems need to run in CO mode to fit in memory
  - Other typical problems won't fit at all
