An Update about ALICE Computing


Presentation Transcript


  1. An Update about ALICE Computing
  Federico Carminati, Peter Hristov
  NEC’2011, Varna, September 12-19, 2011

  2. ALICE@NEC
  • NEC’2001 AliRoot for simulation
  • NEC’2003 Reconstruction with AliRoot
  • NEC’2005 AliRoot for analysis
  • NEC’2007 Still no LHC data => status and plans of the ALICE offline software, calibration & alignment
  • NEC’2009 In preparation for the first LHC data, no presentation
  • NEC’2011 Almost 2 years of stable data taking, a lot of published physics results => an update about the ALICE computing

  3. Why HI collisions?
  • Study QCD at its natural energy scale T = ΛQCD = 200 MeV by creating a state of matter at high density and temperature using high-energy heavy-ion collisions
  • Indication of a transition from hadron gas (HG) to QGP at Tc ≅ 170 MeV, εc ≅ 1 GeV/fm3
  • Phase transition or crossover?
  • Intermediate phase of strongly interacting QGP?
  • Chiral symmetry restoration?
  • Constituent mass → current mass
  [Figure residue: phase-diagram / lattice panels labelled NA49, STAR, ALICE, “3 flavors”.]

  4. History of High-Energy A+B Beams
  • BNL-AGS: mid 80’s, early 90’s
    • O+A, Si+A at 15 AGeV/c, √sNN ~ 6 GeV
    • Au+Au at 11 AGeV/c, √sNN ~ 5 GeV
  • CERN-SPS: mid 80’s, 90’s
    • O+A, S+A at 200 AGeV/c, √sNN ~ 20 GeV
    • Pb+A at 160 AGeV/c, √sNN ~ 17 GeV
  • BNL-RHIC: early 00’s
    • Au+Au at √sNN ~ 130 GeV
    • p+p, d+Au at √sNN ~ 200 GeV
  • LHC: 2010 (!)
    • Pb+Pb at √sNN ~ 5,500 GeV (2,760 in ’10-’12)
    • p+p at √sNN ~ 14,000 GeV (7,000 in ’10-’12)

  5. ALICE Collaboration
  • ~1000 people, 30 countries, ~80 institutes (~1/2 of ATLAS or CMS, ~2x LHCb)
  • Total weight 10,000 t, overall diameter 16.00 m, overall length 25 m, magnetic field 0.5 T
  • Trigger/readout chain: 8 kHz (160 GB/sec) level 0 (special hardware), 200 Hz (4 GB/sec) level 1 (embedded processors), 30 Hz (2.5 GB/sec) level 2 (PCs), 30 Hz (1.25 GB/sec) data recording & offline processing
  • A full pp programme; the data rate for pp is 100 Hz @ 1 MB

  6. Organization
  • Core Offline is a CERN responsibility
    • Framework development, coordination activities, documentation, integration, testing & release, resource planning
  • Each sub-detector is responsible for its own offline system
    • It must comply with the general ALICE Computing Policy as defined by the Computing Board
    • It must integrate into the AliRoot framework
  • http://aliweb.cern.ch/Offline/

  7. PLANNING
  • “In preparing for battle I always found plans useless but planning essential” – Gen. D. Eisenhower
  • (155 open items, 3266 total)

  8. RESOURCES
  • A sore point for ALICE computing

  9. Computing model – pp
  [Data-flow diagram: RAW go from the CERN T0 disk buffer to the T0 tape, to the Grid file catalogue (AliEn FC) and to the T1s; calibration parameters are generated and the first-pass reconstruction runs at the T0; passes 1 & 2 of the reconstruction run at the T1s; T2s produce MC data; the CAF serves ordered and end-user analysis.]

  10. Computing model – AA
  [Data-flow diagram, split between “HI data taking” and “LHC shutdown”: during data taking RAW go to the T0 tape and to the Grid file catalogue (AliEn FC), with calibration and a pilot reconstruction at the T0; during the shutdown RAW are exported from tape to the T1s, the first-pass reconstruction runs and passes 1 & 2 run at the T1s; T2s produce MC data; the CAF serves ordered and end-user analysis.]

  11. Prompt reconstruction
  • Based on PROOF (TSelector) – see the interface sketch below
  • Very useful for high-level QA and debugging
  • Integrated in the AliEVE event display
  • Full Offline code, sampling events directly from DAQ memory
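  The TSelector interface mentioned above is the hook PROOF uses to run user code over the sampled events. A minimal, illustrative sketch of that interface follows; the class, histogram and variable names are invented for the example and this is not the actual ALICE prompt-reconstruction code.

  // QASelector.C - minimal sketch of the TSelector interface used by PROOF
  // (illustrative only, not ALICE code).
  #include <TSelector.h>
  #include <TH1F.h>

  class QASelector : public TSelector {
  public:
     QASelector() : fHist(0) {}
     virtual void   SlaveBegin(TTree *) {
        // Runs on every PROOF worker: book QA histograms and register them
        // in fOutput so that PROOF merges them on the master.
        fHist = new TH1F("hQA", "example QA quantity", 100, 0., 10.);
        fOutput->Add(fHist);
     }
     virtual Bool_t Process(Long64_t /*entry*/) {
        // Runs for every sampled event: decode the event and fill the
        // QA histograms here.
        return kTRUE;
     }
     virtual void   Terminate() {
        // Runs on the client after merging: inspect or store the QA output.
     }
  private:
     TH1F *fHist;   // illustrative histogram, not a real ALICE QA object
     ClassDef(QASelector, 0);
  };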

  12. Visualization
  [Event-display screenshot showing a V0 candidate.]

  13. ALICE Analysis Basic Concepts
  • Analysis models
    • Prompt data processing (calib, align, reco, analysis) @CERN with PROOF
    • Batch analysis using the GRID infrastructure
    • Local analysis
    • Interactive analysis with PROOF+GRID
  • User interface
    • Access to the GRID via the AliEn or ROOT UIs
    • PROOF/ROOT: enabling technology for the (C)AF
    • GRID API class TAliEn
  • Analysis Object Data (AOD) contain only the data needed for a particular analysis
    • Extensible with ∆-AODs
  • Same user code local, on CAF and on the Grid (task sketch below)
  • Work on the distributed infrastructure has been done by the ARDA project
  [Figure residue: decay topologies Ξ⁻ → π⁻(Λ → pπ⁻) and Ω⁻ → K⁻(Λ → pπ⁻).]
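  The “same user code local, on CAF and Grid” point rests on analysis code being written against the AliRoot analysis framework rather than against a specific back end. The following is a hedged, hypothetical task sketch assuming the AliAnalysisTaskSE interface of that AliRoot generation; the class, histogram and file names are invented and it is not part of AliRoot.

  // AliAnalysisTaskMyPt - hypothetical example task illustrating the
  // AliAnalysisTaskSE pattern (not an actual ALICE task).
  #include <TList.h>
  #include <TH1F.h>
  #include "AliAnalysisTaskSE.h"
  #include "AliVEvent.h"
  #include "AliVParticle.h"

  class AliAnalysisTaskMyPt : public AliAnalysisTaskSE {
  public:
     AliAnalysisTaskMyPt(const char *name = "MyPt")
        : AliAnalysisTaskSE(name), fOutput(0), fHistPt(0) {
        DefineOutput(1, TList::Class());          // one output slot with a TList
     }
     virtual void UserCreateOutputObjects() {
        fOutput = new TList();
        fOutput->SetOwner();
        fHistPt = new TH1F("hPt", "track p_{T}; p_{T} (GeV/c)", 100, 0., 10.);
        fOutput->Add(fHistPt);
        PostData(1, fOutput);
     }
     virtual void UserExec(Option_t *) {
        AliVEvent *event = InputEvent();          // same code for ESD and AOD input
        if (!event) return;
        for (Int_t i = 0; i < event->GetNumberOfTracks(); ++i) {
           AliVParticle *track = event->GetTrack(i);
           if (track) fHistPt->Fill(track->Pt());
        }
        PostData(1, fOutput);
     }
     virtual void Terminate(Option_t *) {}
  private:
     TList *fOutput;    // output list merged by the framework
     TH1F  *fHistPt;    // example histogram
     ClassDef(AliAnalysisTaskMyPt, 1);
  };

  The same class can then be run locally, on the CAF via PROOF, or on the Grid, depending only on how the analysis manager is started.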

  14. Analysis train
  • AOD production is organized in a ‘train’ of tasks (manager sketch below)
    • To maximize the efficiency of full-dataset processing
    • To optimize CPU/IO
  • Using the analysis framework
  • Needs monitoring of memory consumption and of the individual tasks
  [Diagram residue: Kine/ESD input feeding TASK 1-4 with efficiency corrections, producing the AOD.]
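  A train is simply several such tasks attached to one AliAnalysisManager, so the input is read once and every “wagon” sees the same events. A sketch under those assumptions follows; the handler, container, task and file names are placeholders and this is not an actual ALICE train macro. It is meant to be run inside an AliRoot session where the analysis libraries are loaded.

  // train_sketch.C - illustrative train setup (placeholder names).
  void train_sketch()
  {
     AliAnalysisManager *mgr = new AliAnalysisManager("train");
     mgr->SetInputEventHandler(new AliESDInputHandler());

     // Two wagons: here the hypothetical task from the previous sketch;
     // in a real train each wagon is a different physics task.
     AliAnalysisTaskSE *t1 = new AliAnalysisTaskMyPt("wagon1");
     AliAnalysisTaskSE *t2 = new AliAnalysisTaskMyPt("wagon2");
     mgr->AddTask(t1);
     mgr->AddTask(t2);

     // All wagons share the common input; each gets its own output container.
     AliAnalysisDataContainer *cin = mgr->GetCommonInputContainer();
     mgr->ConnectInput(t1, 0, cin);
     mgr->ConnectInput(t2, 0, cin);
     mgr->ConnectOutput(t1, 1, mgr->CreateContainer("out1", TList::Class(),
                        AliAnalysisManager::kOutputContainer, "train.root"));
     mgr->ConnectOutput(t2, 1, mgr->CreateContainer("out2", TList::Class(),
                        AliAnalysisManager::kOutputContainer, "train.root"));

     if (!mgr->InitAnalysis()) return;
     TChain *chain = new TChain("esdTree");
     chain->Add("AliESDs.root");                  // placeholder input file
     mgr->StartAnalysis("local", chain);          // "proof" or "grid" elsewhere
  }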

  15. Analysis on the Grid

  16. Production of RAW
  • Successful despite rapidly changing conditions in the code and detector operation
  • 74 major cycles
  • 7.2×10^9 events (RAW) passed through the reconstruction
  • Processed 3.6 PB of data
  • Produced 0.37 TB of ESDs and other data

  17. Sending jobs to data
  [Diagram: the user submits a job to the ALICE central services; the optimizer matches it against the ALICE job catalogue and file catalogue; computing agents at the sites fetch jobs, run them close to the data, register the output and send back the results.]

  18. Storage strategy
  [Diagram: worker nodes access disk storage through xrootd (native xrootd servers behind an xrootd manager, or xrootd emulation / VOBOX::SA in front of the CASTOR, DPM and dCache MSS back-ends, also reachable via SRM).]

  19. The access to the data
  • Direct access to the data from the application via the TAliEn/TGrid interface (sketch below)
  • The ALICE file catalogue resolves lfn → guid → (acl, size, md5) and returns the SE, the pfn and an access envelope, from which the pfn for xrootd access is built
  [Diagram: the application asks the ALICE FC / Tag catalogue “who has the pfn?” for a given GUID, lfn or metadata, then opens the file on the SE via xrootd.]
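  For illustration, the direct access path from a ROOT/AliRoot session might look like the following; the LFN is a made-up placeholder, and the catalogue lookup plus the xrootd access envelope are handled transparently by the plugin.

  // access_sketch.C - minimal sketch of direct data access via TGrid/TAliEn.
  {
     // Authenticate against AliEn; this loads the TAliEn plugin behind TGrid.
     TGrid::Connect("alien://");

     // Opening an alien:// URL resolves lfn -> guid -> pfn in the file
     // catalogue and then reads the physical replica directly via xrootd.
     TFile *f = TFile::Open("alien:///alice/some/path/AliESDs.root");  // placeholder LFN
     if (f && !f->IsZombie()) {
        f->ls();
        f->Close();
     }
  }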

  20. The ALICE way with XROOTD
  • Pure xrootd + the ALICE strong-authorization plugin; no difference between T1 and T2 (only size and QoS)
  • WAN-wide globalized deployment, very efficient direct data access
  • Tier-0: CASTOR + xrootd serving data normally
  • Tier-1: pure xrootd cluster serving conditions to ALL the Grid jobs via WAN
  • “Old” DPM + xrootd at some Tier-2s
  • A virtual mass-storage system built on data globalization: local clients work normally at each site; a site missing a file asks the ALICE global redirector, gets redirected to the collaborating cluster that has it, and fetches it immediately (sketch below)
  • More details and complete info in “Scalla/Xrootd WAN globalization tools: where we are”, CHEP09
  [Diagram: xrootd/cmsd clusters at CERN, GSI and other sites federated under the ALICE global redirector; a smart client can point at the redirector directly.]
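  A sketch of what such a direct open through the globalized setup looks like from a ROOT client; the redirector hostname below is a placeholder, not the real ALICE endpoint.

  // xrd_sketch.C - illustrative direct xrootd access through a redirector.
  {
     // The redirector forwards the request to a collaborating cluster that
     // actually holds the file; the client then reads from there.
     TFile *f = TFile::Open("root://global-redirector.example//alice/some/file.root");
     if (f && !f->IsZombie()) {
        // ... read as usual ...
        f->Close();
     }
  }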

  21. CAF
  • The whole CAF becomes an xrootd cluster: each worker node runs PROOF and xrootd, coordinated by the PROOF master / xrootd redirector and staging data from CASTOR (usage sketch below)
  • Powerful and fast machinery – very popular with users
  • Allows for any use pattern, however quite often leading to contention for resources
  [Plot residue: observed vs. expected speedup, ~70% utilization.]
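  The typical CAF use pattern from a user's ROOT session might be sketched as follows; the master URL, input file and selector names are placeholders.

  // caf_sketch.C - sketch of PROOF-based processing on an analysis facility.
  {
     TProof::Open("user@caf-master.example");   // connect to the PROOF master
     TChain *chain = new TChain("esdTree");
     chain->Add("AliESDs.root");                // or a registered CAF dataset
     chain->SetProof();                         // route Process() through PROOF
     chain->Process("QASelector.C+");           // e.g. the selector sketched above
  }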

  22. Analysis facilities – profile
  • 1.8 PB of data through CAF, 550 TB through SKAF
  • For comparison: on the Grid we have written 15 PB and read 37 PB

  23. The ALICE Grid
  • AliEn working prototype in 2002
    • Single interface to distributed computing for all ALICE physicists
    • File catalogue, job submission and control, software management, user analysis
  • ~80 participating sites now
    • 1 T0 (CERN/Switzerland)
    • 6 T1s (France, Germany, Italy, The Netherlands, Nordic DataGrid Facility, UK); KISTI and UNAM coming (!)
    • ~73 T2s spread over 4 continents
  • ~30,000 (out of ~150,000 WLCG) cores and 8.5 PB of disk
  • Resources are “pooled” together
    • No localization of roles/functions
    • National resources must integrate seamlessly into the global Grid to be accounted for
    • Funding agencies contribute proportionally to the number of PhDs (M&O-A share)
    • T3s have the same role as T2s, even if they do not sign the MoU
  • http://alien.cern.ch

  24. All is in MonALISA

  25. GRID operation principle
  • The VO-box system (very controversial in the beginning)
    • Has been extensively tested
    • Allows for site-services scaling
    • Is a simple isolation layer for the VO in case of troubles
  [Diagram: central AliEn services talk to a VO-box at each site, which interfaces to the local WMS (gLite/ARC/OSG/local) and storage (dCache/DPM/CASTOR/xrootd) and handles monitoring and package management.]

  26. Operation – central/site support
  • Central services support (2 FTEs equivalent)
    • There are no experts who do exclusively support: there are 6 highly qualified experts doing development/support
  • Site services support is handled by ‘regional experts’ (one per country) in collaboration with the local cluster administrators
    • An extremely important part of the system
    • In normal operation ~0.2 FTEs/site
    • Regular weekly discussions and active all-activities mailing lists

  27. Summary
  • The ALICE offline framework (AliRoot) is a mature project that covers simulation, reconstruction, calibration, alignment, visualization and analysis
  • Successful operation with “real data” since 2009: the results for several major physics conferences were obtained in time
  • The Grid and AF resources are adequate to serve the RAW/MC and user-analysis tasks; more resources would of course be better
  • Site operation is very stable
  • The gLite (now EMI) software is mature and few changes are necessary

  28. Some Philosophy

  29. The code
  • The move to C++ was probably inevitable, but it caused a lot of “collateral damage”
    • The learning process was long, and it is still going on
    • Very difficult to judge what would have happened “had ROOT not been there”
  • The most difficult question is now “what next?”
    • A new language? There is none on the horizon
    • Different languages for different scopes (Python, Java, C, CUDA…)? Just think about debugging
    • A better discipline in using C++ (in ALICE: no STL/templates)
  • Code-management tools, build systems, (c)make, autotools: still a lot of “glue” has to be provided, no comprehensive system “out of the box”

  30. The Grid
  • The half-empty glass
    • We are still far from the “Vision”
    • A lot of tinkering and hand-holding to keep it alive
    • 4+1 solutions for each problem
    • We are only now seeing some light at the end of the tunnel of data management
  • The half-full glass
    • We are using the Grid as a “distributed heterogeneous collection of high-end resources”, which was the idea after all
    • LHC physics is being produced by the Grid

  31. Grid need-to-have
  • Far more automation and resilience: make the Grid less manpower-intensive
  • More integration between workload management and data placement
  • Better control of upgrades (OS, MW), or better transparent integration of different OS/MW
  • Integration of the network as an active, provisionable resource
  • “Close” storage element, file replication/caching vs. remote access
  • Better monitoring, or perhaps simply more coherent monitoring...
