
HLT Data Challenge



  1. HLT Data Challenge
  • PC² – Setup / Results
  • Clusterfinder Benchmarks – Setup / Results
  Jochen Thäder – Kirchhoff Institute of Physics – University of Heidelberg

  2. PC² Paderborn
  PC² – Paderborn Center for Parallel Computing
  Architecture of the ARMINIUS cluster:
  • 200 nodes with Dual Intel Xeon 64-bit, 3.2 GHz
  • 800 GByte main memory (4 GByte per node)
  • InfiniBand network
  • Gigabit Ethernet network
  • RedHat Linux 4

  3. General Test – Configuration
  • Hardware Configuration
    • 200 nodes with Dual 3.2 GHz Intel Xeon CPUs
    • Gigabit Ethernet
  • Framework Configuration
    • HLT Data Framework with TCP Dump Subscriber processes (TDS)
    • HLT Online Display connecting to the TDS
  • Software Configuration
    • RHEL 4 update 1
    • RHEL 2.6.9 kernel
    • bigphysarea patch for 2.6
    • PSI2 driver for 2.6

  4. Full TPC (36 slices) on 188 nodes (I)
  • Hardware Configuration
    • 188 nodes with Dual 3.2 GHz Intel Xeon CPUs
  • Framework Configuration
    • Compiled in debug mode, no optimizations
  • Setup per slice (6 incoming DDLs)
    • 3 nodes for cluster finding, each node with 2 filepublisher processes and 2 cluster finding processes
    • 2 nodes for tracking, each node with 1 tracking process
  • 8 Global Merger processes, merging the tracks of the 72 tracking nodes (see the node-count sketch below)
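A quick arithmetic cross-check of the node budget, as a minimal sketch: the slice, cluster-finder, and tracking numbers are taken from the slide, while the assumption that the 8 Global Merger processes occupy the remaining nodes is mine.

    # Node budget for the full-TPC test (numbers taken from the slide).
    slices = 36
    cf_nodes_per_slice = 3   # each with 2 filepublisher + 2 cluster finding processes
    tr_nodes_per_slice = 2   # each with 1 tracking process

    cf_nodes = slices * cf_nodes_per_slice   # 108 cluster-finding nodes
    tr_nodes = slices * tr_nodes_per_slice   # 72 tracking nodes
    # Assumption: the 8 Global Merger processes run on the remaining nodes.
    gm_nodes = 188 - cf_nodes - tr_nodes     # 8
    print(cf_nodes, tr_nodes, gm_nodes)      # 108 72 8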

  5. Full TPC (36 slices) on 188 nodes (II)
  Framework Setup: HLT Data Framework setup for 1 slice (diagram: simulated TPC data enters through 6 DDLs/patches, is published on the cluster finder (CF) nodes, passed to the tracker (TR) nodes, merged by the Global Merger (GM) processes, and sent to the Online Display).
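A minimal sketch of the per-slice chain in that diagram, using the stage names from the slide (DDL/patch publishers, cluster finders, trackers, Global Mergers, Online Display); the dictionary layout is purely illustrative and not the HLT Data Framework's actual configuration format.

    # Illustrative description of one slice's processing chain (not the real config syntax).
    slice_chain = {
        "publishers": [f"DDL_patch_{i}" for i in range(6)],   # 6 incoming DDLs per slice
        "clusterfinder_nodes": ["CF_1", "CF_2", "CF_3"],      # 2 CF processes per node
        "tracker_nodes": ["TR_1", "TR_2"],                    # 1 tracking process per node
    }
    # Stages shared by all 36 slices.
    global_mergers = [f"GM_{i}" for i in range(1, 9)]         # 8 Global Merger processes
    sinks = ["TCP Dump Subscriber", "Online Display"]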

  6. Full TPC (36 slices) on 188 nodes (III)
  • Empty Events
    • Real data format, empty events, no hits/tracks
    • Rate approx. 2.9 kHz after tracking
    • Limited by the filepublisher processes

  7. Full TPC (36 slices) on 188 nodes (IV)
  • Simulated Events
    • Simulated pp data (14 TeV, 0.5 T)
    • Rate approx. 220 Hz after tracking
    • Limited by the tracking processes
    • Solution: use more nodes (see the scaling sketch below)
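Since the rate is tracking-limited and the proposed fix is more nodes, a rough estimate follows from assuming the rate scales linearly with the number of tracking nodes; the 220 Hz and the 72 tracking nodes are from the slides, while the 1 kHz target and the perfect-scaling assumption are hypothetical.

    # Back-of-the-envelope scaling estimate (assumes perfect linear scaling).
    measured_rate_hz = 220    # simulated pp events, after tracking
    tracking_nodes = 72       # 2 tracking nodes per slice x 36 slices

    target_rate_hz = 1000     # hypothetical target rate
    rate_per_node = measured_rate_hz / tracking_nodes
    needed_tracking_nodes = target_rate_hz / rate_per_node
    print(round(needed_tracking_nodes))   # ~327 tracking nodes under these assumptions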

  8. Conclusion of Full TPC Test
  • The main bottleneck is the processing of the data itself
  • The system is not limited by the HLT data transport framework
  • The test was limited by the number of available nodes

  9. "Test Setup"

  10. Clusterfinder Benchmarks (CFB)
  • pp events: 14 TeV, 0.5 T
  • Number of events: 1200
  • Iterations: 100
  • TestBench: SimpleComponentWrapper
  • TestNodes:
    • HD ClusterNodes e304, e307 (PIII, 733 MHz)
    • HD ClusterNodes e106, e107 (PIII, 800 MHz)
    • HD GatewayNode alfa (PIII, 1.0 GHz)
    • HD ClusterNode eh001 (Opteron, 1.6 GHz)
    • CERN ClusterNode eh000 (Opteron, 1.8 GHz)
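The benchmark protocol (1200 pp events, 100 iterations per event, timed through the SimpleComponentWrapper test bench) can be pictured with a small timing-harness sketch; run_clusterfinder and load_events are hypothetical stand-ins, not the actual test-bench API.

    import time

    def run_clusterfinder(event):
        """Hypothetical stand-in for the wrapped cluster-finder component."""
        pass

    def benchmark(events, iterations=100):
        # Average the per-event processing time over many iterations, as in the CFB setup.
        timings = []
        for event in events:
            start = time.perf_counter()
            for _ in range(iterations):
                run_clusterfinder(event)
            timings.append((time.perf_counter() - start) / iterations)
        return timings

    # Usage (hypothetical): timings = benchmark(load_events("pp_14TeV_0.5T"), iterations=100)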

  11. CFB – Signal Distribution per patch

  12. CFB – Cluster Distribution per patch

  13. CFB – PadRow / Pad Distribution

  14. CFB – Timing Results (I)

  15. CFB – Timing Results (II)

  16. CFB – Conclusion / Outlook
  • Learned about the different needs of each patch
  • The number of processing components has to be adjusted to the particular patch (see the allocation sketch below)
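One way to read the outlook: allocate cluster-finder components to each patch in proportion to its measured load, as in this minimal sketch. The per-patch weights here are placeholders, not the benchmark results.

    # Allocate processing components per patch in proportion to measured load.
    def allocate_components(patch_load, total_components):
        total = sum(patch_load.values())
        # Note: rounding can make the sum differ slightly from total_components.
        return {patch: max(1, round(total_components * load / total))
                for patch, load in patch_load.items()}

    # Placeholder weights; in practice these would come from the CFB timing results.
    patch_load = {0: 1.0, 1: 1.0, 2: 1.4, 3: 1.4, 4: 1.1, 5: 1.1}
    print(allocate_components(patch_load, 12))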
