
Preparation for the next LHC11h pass: TPC reconstruction



  1. Preparation for the next LHC11h pass: TPC reconstruction P. Hristov 28/08/12

  2. Introduction
  • The TPC reconstruction is the main memory consumer in PbPb data processing
  • LHC11h data are “difficult” to process:
    • trigger bias towards more central events => higher multiplicity
    • lower HV => fewer clusters, more combinations & fakes
    • more background => more combinations & fakes
    • higher luminosity => more pileup
    • HLT compression => some effects that needed attention
  • LHC11h Pass 2 took ~2x longer than foreseen

  3. LHC11h Pass 2 – reconstruction details
  • Use v5-01-Rev-19 in the production
  • Start in inverse time order (last runs first, “LIFO”): OK
  • Use the MB trigger for CPass0: OK
  • Exercise the full production setup on runs from the “grey area”: special “gdb” production, run 170593 (catch and cure the exceptions): OK
  • Run with TPC pools (avoids memory thrashing): OK
  • Work on a local raw file (avoids xrootd overhead): OK
  • Use an OCDB snapshot (minimizes load): OK
  • Keep only the rec. points for the current event (minimizes the local workspace): OK
  • Switch off QA (memory overhead): OK
  • Deterministic splitting of the failed jobs (minimizes the memory since the tree buffers grow less?): OK
  • Final result: 95% of the raw files were successfully reconstructed
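To make the setup concrete, here is a minimal sketch of how several of these options map onto a standard rec.C-style AliRoot macro. AliReconstruction with SetInput, SetDefaultStorage and SetRunQA is the usual interface; the snapshot call (SetCDBSnapshotMode) is quoted from memory and may differ between AliRoot versions.

    // rec.C -- minimal sketch of a production-style reconstruction macro
    // (assumed AliRoot v5-01-Rev-19 interface; the snapshot method name
    // is an assumption and may differ between versions).
    void rec()
    {
      AliReconstruction rec;
      rec.SetInput("raw.root");                // local raw file, no xrootd overhead
      rec.SetDefaultStorage("local://OCDB");   // local OCDB storage
      rec.SetCDBSnapshotMode("OCDB.root");     // OCDB snapshot (assumed method name)
      rec.SetRunQA(":");                       // switch off QA to save memory
      rec.Run();
    }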

  4. Preparation for LHC11h
  • The resubmission of jobs that exceeded the memory limits is very inefficient
  • The deterministic splitting has its own disadvantages:
    • It is done only after 3 resubmissions of the full job
    • It needs a collection of failed jobs
    • It may need its own resubmissions
    • The number of files registered in the catalog grows
  • We have to reduce the memory that the reconstruction needs
  • Alternatively, the management can discuss and buy more resources; this option is not considered for this meeting
  • The main consumer is the TPC
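In practice, deterministic splitting amounts to restricting each sub-job to a fixed event window, so that no single job has to hold a whole chunk's worth of tree buffers. A minimal sketch using AliReconstruction::SetEventRange (a real method; the file names and the window handling around it are illustrative):

    // splitRec.C -- illustrative sketch: reconstruct only the event
    // window [first, last] of a chunk, so each resubmitted piece of a
    // failed job stays within the memory limit.
    void splitRec(Int_t first, Int_t last)
    {
      AliReconstruction rec;
      rec.SetInput("raw.root");
      rec.SetDefaultStorage("local://OCDB");
      rec.SetEventRange(first, last);   // process only this slice of events
      rec.Run();
    }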

  5. Massif report (file 11000167920023.36.root, 267 events)
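(For reference: Massif is Valgrind's heap profiler. A report like the one on this slide is typically produced by running the job under "valgrind --tool=massif" and rendering the resulting massif.out file with "ms_print"; the exact aliroot invocation used here is not given in the slides.)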

  6. Allocation details

  7. Observations
  • The TPC clusters (offline + HLT) allocate 343.4 + 141.6 = 485 MB
    • Are all duplications removed?
    • Should we also add the 84.8 MB from AliTPCtrackerSector?
  • The seeding needs 473.6 MB together with the tree buffers (97.6 MB in the streamers)
  • The calibration objects are not taken into account
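Summing the quoted contributions gives a rough lower bound on the TPC footprint; the trivial tally below just restates the numbers from the Massif report.

    // Rough tally of the TPC memory budget (MB), numbers taken from the
    // Massif report above; calibration objects are not included, so the
    // true footprint is larger.
    const double offlineAndHltClusters = 343.4 + 141.6; // = 485.0
    const double trackerSector         = 84.8;          // AliTPCtrackerSector, if counted
    const double seeding               = 473.6;         // incl. 97.6 MB of tree buffers
    const double total = offlineAndHltClusters + trackerSector + seeding; // ~1043.4 MB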

  8. Possible actions
  • Improve the existing reconstruction:
    • Check again the information in the objects
      • Example: the track points in each seed contain the same info as the clusters
      • Compressed values?
    • Check for cluster copies that can be avoided
    • Do not keep all the clusters; load them when needed
      • Separate the A and C sides, use matching of the tracks
      • Keep only part of the sectors in memory: this may require significant changes in the code
    • Reduce the number of seeds:
      • Use stricter criteria
      • Decide earlier to drop some seeds
  • Replace some parts of the reconstruction (effects on the calibration?):
    • HLT clusterization also in offline (in case of uncompressed data)
    • Use a different seeding algorithm for “golden seeds” and a combinatorial algorithm afterwards
      • Hough transform
      • HLT algorithm (does it allocate less memory?)
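As an illustration of the “load clusters when needed” item above, a hypothetical per-sector cluster cache is sketched here. None of these class or method names exist in AliRoot; the sketch only shows the shape of the proposed change (keep a few sectors resident, evict the rest).

    #include <TClonesArray.h>

    // Hypothetical sketch: keep only the TPC sectors touched by the
    // active seeds in memory; read a sector from the RecPoints tree on
    // first access and evict it once no seed references it any more.
    class SectorClusterCache {
    public:
      SectorClusterCache() {
        for (Int_t i = 0; i < kNSectors; ++i) fClusters[i] = 0;
      }
      const TClonesArray* GetSector(Int_t sector) {
        if (!fClusters[sector]) Load(sector);   // lazy load on first access
        return fClusters[sector];
      }
      void Evict(Int_t sector) {                // free a sector no longer needed
        delete fClusters[sector];
        fClusters[sector] = 0;
      }
    private:
      void Load(Int_t sector);                  // read one sector from the tree
      enum { kNSectors = 72 };                  // 18 sectors x 2 sides x (IROC+OROC)
      TClonesArray* fClusters[kNSectors];       // resident sectors only
    };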

  9. Conclusions
  • The present situation makes it difficult or impossible to run the next pass on LHC11h
  • We need some urgent work on the TPC reconstruction
  • The “reference sample” of RAW events should help to speed up the tests and to estimate the performance
  • Other solutions are probably possible; the list presented here is meant to start the discussion

  10. Plans
  • Who?
  • What?
  • When?
