
ALICE Computing TDR


Presentation Transcript


  1. ALICE Computing TDR Federico Carminati June 29, 2005

  2. Layout • Parameters • Computing framework • Distributed computing and Grid • Project Organisation and planning • Resources needed • http://aliceinfo.cern.ch/NewAlicePortal/en/Collaboration/Documents/TDR/Computing.html ALICE Computing TDR

  3. Parameters • Computing framework • Distributed computing and Grid • Project Organisation and planning • Resources needed ALICE Computing TDR

  4. Parameters ALICE Computing TDR

  5. Parameters • Computing framework • Distributed computing and Grid • Project Organisation and planning • Resources needed ALICE Computing TDR

  6. Offline framework • AliRoot in development since 1998 • Two main packages to install (ROOT and AliRoot) • Ported to Linux (IA32/64 & AMD), Mac OS X (PPC & Intel), Digital Tru64, SunOS… • Over 50 developers (30% at CERN), one CVS repository • Integration with DAQ (data recorder) and HLT (CVS) • Abstract interfaces for modularity • Subset of C++ used for maximum portability • Very close integration of physicists and programmers ALICE Computing TDR

  7. AliRoot layout [diagram]: ROOT at the base; the STEER framework with AliSimulation, AliReconstruction, AliAnalysis and the ESD classes; the Virtual MC in front of G3, G4 and FLUKA; event generators (EVGEN: PYTHIA6, HIJING, MEVSIM, ISAJET, PDF); the detector modules (ITS, TPC, TRD, TOF, PHOS, EMCAL, PMD, MUON, RICH, ZDC, FMD, START, CRT, STRUCT); analysis packages (HBTAN, JETAN, HBTP, RALICE); interfaces to EGEE+AliEn. ALICE Computing TDR

  8. Software management • Major release ~ every six months, minor release (tag) ~ monthly • Emphasis on delivering production code • Nightly-produced UML diagrams, code listings, coding-rule violations, builds and tests • No version management software • Development of new coding tools (IRST): smell detection, aspect-oriented programming, genetic testing ALICE Computing TDR

  9. Detector Construction Database (DCDB) • Detector construction in distributed environment: • Sub-detector groups work independently • Data collected in a central repository to facilitate movement of components between groups and during integration and operation • Different user interfaces • WEB portal • LabView, XML • ROOT for visualisation • In production since 2002 • Important spin-offs • Cable Database • Calibration Database ALICE Computing TDR

  10. Simulation • Simulation performed with G3 up to now • Will move to FLUKA and G4 • The VMC insulates users from the transport MC (see the sketch below) • New geometrical modeller in production • Physics Data Challenge ’05 with FLUKA • Interface with G4 designed (4Q ’05?) • Test-beam validation activity ongoing ALICE Computing TDR
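To make the VMC idea concrete, here is a minimal Config.C-style ROOT macro sketch. It assumes the Geant3 and FLUKA VMC packages of that period (the class names TGeant3TGeo and TFluka come from those packages; exact constructors and settings should be checked against a real AliRoot Config.C). The point is that the concrete transport engine is chosen in one place, while all other code talks only to the abstract TVirtualMC interface.

// Illustrative Config.C-style sketch: pick one transport engine here,
// everything else sees only the abstract TVirtualMC interface (gMC).
void Config()
{
   // Instantiate one concrete engine behind the common interface
   // (class names assumed from the G3/FLUKA VMC packages of the time).
   new TGeant3TGeo("C++ interface to Geant3");   // Geant3 via the VMC
   // new TFluka("C++ interface to FLUKA", 0);   // or FLUKA via the VMC

   // Engine-independent settings through the common API:
   gMC->SetProcess("DCAY", 1);     // switch decays on
   gMC->SetCut("CUTGAM", 1.e-3);   // gamma transport cut, in GeV
}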

  11. The Virtual MC [diagram]: user code, reconstruction, visualisation, the generators and the geometrical modeller all talk to the VMC, which in turn drives the concrete engines (G3 transport, G4 transport, FLUKA transport). ALICE Computing TDR

  12. TGeo modeller ALICE Computing TDR
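As background to the TGeo modeller, a small self-contained ROOT TGeo example follows (a toy geometry, not any ALICE detector): the usual pattern of materials, media, volumes and nodes, closed into a navigable geometry.

// Toy TGeo example (illustrative only, not ALICE geometry): a world box
// containing one aluminium tube, built with the ROOT geometrical modeller.
#include "TGeoManager.h"
#include "TGeoMaterial.h"
#include "TGeoMedium.h"
#include "TGeoVolume.h"

void BuildToyGeometry()
{
   TGeoManager* geom = new TGeoManager("toy", "Toy detector geometry");

   // Materials and tracking media
   TGeoMaterial* matVac = new TGeoMaterial("Vacuum", 0., 0., 0.);
   TGeoMedium*   vacuum = new TGeoMedium("Vacuum", 1, matVac);
   TGeoMaterial* matAl  = new TGeoMaterial("Aluminium", 26.98, 13., 2.7);
   TGeoMedium*   alu    = new TGeoMedium("Aluminium", 2, matAl);

   // World volume and one daughter volume (dimensions in cm)
   TGeoVolume* top  = geom->MakeBox("TOP", vacuum, 300., 300., 300.);
   geom->SetTopVolume(top);
   TGeoVolume* tube = geom->MakeTube("TUBE", alu, 80., 85., 250.);
   top->AddNode(tube, 1);

   geom->CloseGeometry();   // geometry is now ready for navigation
}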

  13. Results [validation plots]: 5000 1 GeV/c protons in 60T field; HMPID with 5 GeV pions; Geant3 compared with FLUKA. ALICE Computing TDR

  14. Reconstruction strategy • Very high flux, TPC occupancy up to 40% • Maximum information approach • Optimization of access and use of information • Localize relevant information • Keep this information until it is needed ALICE Computing TDR

  15. Tracking & PID [plots]: efficiency and contamination, for the TPC alone and for ITS+TPC+TOF(+TRD). CPU timing on a PIV 3 GHz at dN/dy = 6000: • TPC tracking ~ 40 s • TPC kink finder ~ 10 s • ITS tracking ~ 40 s • TRD tracking ~ 200 s ALICE Computing TDR
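Summing the per-detector contributions quoted above gives a rough, order-of-magnitude total tracking cost for one central event on that machine:

\[
t_{\mathrm{tracking}} \approx (40 + 10 + 40 + 200)\ \mathrm{s} \approx 290\ \mathrm{s}
\]

per central event on the quoted PIV 3 GHz.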

  16. Condition and alignment • Heterogeneous sources periodically polled • ROOT files with condition information created • Published on the Grid and distributed as needed by the Grid DMS • Files contain validity information and are identified via DMS metadata • No need for a distributed DBMS • Reuse of the existing Grid services ALICE Computing TDR
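A minimal sketch of this scheme, using plain ROOT I/O rather than the actual AliRoot condition classes: the calibration payload and its validity range travel together in one ROOT file, whose name (and catalogue metadata) encode the validity. All class, detector and file names below are illustrative.

// Illustrative only (not the AliRoot condition classes): write a
// calibration payload plus its validity into a single ROOT file; the
// Grid file catalogue would register it with matching metadata.
#include "TFile.h"
#include "TNamed.h"
#include "TObjString.h"
#include "TString.h"

void WriteConditionFile(Int_t firstRun, Int_t lastRun)
{
   // Hypothetical naming convention: detector, object, validity range
   TString name = TString::Format("TPC_Pedestals_Run%d_%d.root",
                                  firstRun, lastRun);
   TFile f(name, "RECREATE");

   // Placeholder payload standing in for real calibration data
   TNamed payload("TPCPedestals", "toy pedestal payload");
   payload.Write("payload");

   // Validity also stored inside the file, so it travels with the data
   TObjString validity(TString::Format("firstRun=%d lastRun=%d",
                                       firstRun, lastRun));
   validity.Write("validity");

   f.Close();
}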

  17. External relations and DB connectivity [diagram]: AliRoot connects through APIs to the ECS, DAQ, Trigger, DCS, HLT and DCDB; calibration procedures produce calibration files and calibration classes that are stored, together with physics data files, in the Grid FC (metadata + file store). Collected from the detector URs: source, volume, granularity, update frequency, access pattern, runtime environment and dependencies. (API = Application Program Interface.) ALICE Computing TDR

  18. Metadata • Essential for the selection of events • Grid file catalogue for file-level metadata • Need event-level metadata • Collaboration with STAR • Prototype in preparation for PDC’05-III ALICE Computing TDR
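To make the event-level metadata idea concrete, here is a toy sketch (not the actual ALICE or STAR tag classes): a small "tag" tree holds per-event summary quantities, and a query on it yields an entry list of interesting events before any ESD file is opened. All branch names and cut values are hypothetical.

// Toy event-tag sketch: build a tag tree, then select events with a cut
// and obtain a TEntryList pointing back to the selected events.
#include <cstdio>
#include "TEntryList.h"
#include "TFile.h"
#include "TTree.h"

void TagAndSelect()
{
   TFile f("toy_tags.root", "RECREATE");
   TTree tags("tags", "per-event summary quantities");
   Int_t   run = 0, event = 0, nTracks = 0;
   Float_t maxPt = 0.f;
   tags.Branch("run", &run, "run/I");
   tags.Branch("event", &event, "event/I");
   tags.Branch("nTracks", &nTracks, "nTracks/I");
   tags.Branch("maxPt", &maxPt, "maxPt/F");

   // ... one entry per event would be filled during reconstruction ...

   // Later, a user query selects events without touching the ESDs:
   tags.Draw(">>highMult", "nTracks > 5000 && maxPt > 5", "entrylist");
   TEntryList* sel = (TEntryList*)gDirectory->Get("highMult");
   if (sel) printf("selected %lld events\n", sel->GetN());

   tags.Write();
   f.Close();
}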

  19. ALICE CDC’s ALICE Computing TDR

  20. Parameters • Computing framework • Distributed computing and Grid • Project Organisation and planning • Resources needed ALICE Computing TDR

  21. ALICE Physics Data Challenges ALICE Computing TDR

  22. PDC04 schema [diagram]: AliEn job control and data transfer between CERN, Tier1 and Tier2 centres; production of RAW, shipment of RAW to CERN, reconstruction of RAW in all T1’s, analysis. ALICE Computing TDR

  23. Goals, structure and tasks • Validate the computing model with ~10% of the SDTY data • Use the offline chain, the Grid, PROOF and the ALICE ARDA prototype • Production of underlying Pb+Pb and p+p events • Completed June 2004 • Mixing of different signal events with underlying Pb+Pb events (up to 50 times) • Completed September 2004 • Distributed analysis • Delayed ALICE Computing TDR

  24. Phase 2 job structure [diagram] (completed Sep. 2004): central servers handle master job submission, the Job Optimizer (N sub-jobs), the RB, the file catalogue, process monitoring and control, and SEs; underlying event input files are read from CERN CASTOR; sub-jobs run on AliEn CEs and, through the AliEn-LCG interface, on LCG CEs, each simulating the event reconstruction and remote event storage; files stored on LCG SEs are registered in the AliEn FC with the LCG LFN used as AliEn PFN; output files are zipped into archives, with the primary copy on local SEs via edg(lcg) copy&register and a backup copy in CERN CASTOR. ALICE Computing TDR

  25. Summary of Phase 1 & Phase 2 • 33 sites, 3 Grids • AliEn/LCG share: 75%/25% in Phase 1, 89%/11% in Phase 2 • 400 000 jobs, 6 hours/job, 750 MSi2K hours • 9M entries in the AliEn file catalogue • 4M physical files at 20 AliEn SEs world-wide • 30 TB at CERN CASTOR • 10 TB at remote SEs + 10 TB backup at CERN • 200 TB network transfer CERN → remote centres • ~1 000 parameters • 24M records at 1-minute intervals ALICE Computing TDR

  26. Summary of PDC’04 • Computer Centres • Tuning of environment • Availability of resources • Middleware • Phase 1&2 successful • AliEn fully functional • LCG not yet ready • No AliEn development for phase 3, LCG not ready • Computing model validation • AliRoot worked well • Data Analysis partially tested on local CN, distributed analysis prototype demonstrated ALICE Computing TDR

  27. Development of Analysis • ROOT & a small library on AOD’s and ESD’s (see the sketch below) • Work on distributed analysis from ARDA • Batch prototype tested at the end of 2004 • Interactive prototype demonstrated at the end of 2004 • Physics Working Groups providing requirements • Planning for a fast analysis facility at CERN ALICE Computing TDR
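A minimal local-analysis sketch, assuming ESDs sit in files called AliESDs.root holding a tree named esdTree (as in the AliRoot of that period) and that the user supplies a TSelector; the selector name is a placeholder. PROOF parallelises exactly this pattern (next slide).

// Minimal local ESD analysis sketch: chain the ESD files and run a
// user TSelector over them; the same chain can later go through PROOF.
#include "TChain.h"

void LocalEsdAnalysis()
{
   TChain chain("esdTree");            // tree name used for ALICE ESDs
   chain.Add("AliESDs.root");          // add one or more ESD files
   chain.Process("MyEsdSelector.C+");  // hypothetical user selector macro
}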

  28. PROOF on the Grid [diagram]: the PROOF client (Grid-middleware independent) performs Grid/ROOT authentication against a Grid access control service, retrieves the list of logical files (LFN + MSN) from the Grid file/metadata catalogue, and sends a booking request with logical file names to the PROOF master; the master, via PROOF Steer and the Grid service interfaces (TGrid UI/Queue UI) and a booking DB, starts and registers the slaves; PROOF slave servers (proofd/rootd) at the LCG sites A, B, … are reached through optional site gateways and forward proxies with only outgoing connectivity, the slave ports being mirrored on the master host; from there on a “standard” PROOF session runs. ALICE Computing TDR
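A hedged usage sketch of the PROOF side: the same chain as above, now processed in parallel on a PROOF cluster; the master URL and selector name are placeholders.

// PROOF usage sketch: connect to a master, attach the chain to the
// PROOF session and process the same selector in parallel on the slaves.
#include "TChain.h"
#include "TProof.h"

void ProofEsdAnalysis()
{
   TProof::Open("user@proof-master.example.org");  // placeholder master URL
   TChain chain("esdTree");
   chain.Add("AliESDs.root");
   chain.SetProof();                    // route Process() through PROOF
   chain.Process("MyEsdSelector.C+");   // slaves compile and run the selector
}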

  29. ALICE computing model • pp • Quasi-online data distribution and first reconstruction at T0 • Further reconstructions at T1’s • AA • Calibration, alignment and pilot reconstructions during data taking • Data distribution and first reconstruction at T0 during the four months after the AA run • Further reconstructions at T1’s • One copy of RAW at T0 and one distributed over the T1’s ALICE Computing TDR

  30. ALICE computing model • T0 • First pass reconstruction, storage of one copy of RAW, calibration data and first-pass ESD’s • T1 • Reconstructions and scheduled analysis, storage of the second collective copy of RAW and one copy of all data to be kept, disk replicas of ESD’s and AOD’s • T2 • Simulation and end-user analysis, disk replicas of ESD’s and AOD’s • Difficult to estimate network load ALICE Computing TDR

  31. ALICE T1’s & T2’s • T1 for ALICE: CERN, CCIN2P3, CNAF, GridKa, NIKHEF, NDGF, RAL, USA (under discussion) • T2 for ALICE: CERN, CCIN2P3, Nantes, Clermont-Ferrand, Paris, Bari, Catania, Legnaro, Torino, GSI, Russia, Prague, Korea, Kolkata, Wuhan, Cape Town, USA, UK Grid, Athens ALICE Computing TDR

  32. ALICE MW requirements • Baseline Services available on LCG (in three flavours?) • An agreed standard procedure to deploy and operate VO-specific services • Tests of the integration of the components have started ALICE Computing TDR

  33. Parameters • Computing framework • Distributed computing and Grid • Project Organisation and planning • Resources needed ALICE Computing TDR

  34. Core Computing and Software organisation [chart]: the Offline Coordinator (Deputy PL) and the Computing Board (chair: Computing Coordinator) interface to the Detector Projects, Software Projects, the Management Board, LCG (SC2, PEB, GDB, POB), the US and EU Grid coordinations, DAQ, HLT, the Regional Tiers and the International Offline Board. Core Computing and Software comprises: • Production Environment Coord.: production environment (simulation, reconstruction & analysis), distributed computing environment, database organisation • Framework & Infrastructure Coord.: framework development (simulation, reconstruction & analysis), persistency technology, computing data challenges, industrial joint projects, tech. tracking, documentation • Simulation Coord.: detector simulation, physics simulation, physics validation, GEANT 4 integration, FLUKA integration, radiation studies, geometrical modeller • Reconstruction & Physics Soft Coord.: tracking, detector reconstruction, global reconstruction, analysis tools, analysis algorithms, physics data challenges, calibration & alignment algorithms • Offline Coordination: resource planning, relations with funding agencies, relations with the C-RRB ALICE Computing TDR

  35. Core Computing staffing [diagram]: funding from the Computing Project, M&O-A, the detector projects and the Physics WGs supports the CERN Core Offline, the Extended Core Offline and the offline activities in the other projects (indicative figures quoted: 20~15, 10~7, 100~500?), covering core software, infrastructure & services, central support, offline coordination, subdetector software and physics analysis software. • The call to the collaboration has been successful • Four people have been sent to CERN to help with Core Computing • Securing a few long-term positions is still a very high priority ALICE Computing TDR

  36. Milestones • Jun 05: prototype of the condition infrastructure • Jun 05: FLUKA MC in production with new geometrical modeller • Jun 05: Computing TDR completed • Aug 05: PDC05 simulation with Service Data Challenge 3 • Sep 05: MetaData prototype infrastructure ready • Nov 05: Analysis of PDC05 data • Dec 05: condition infrastructure deployed • Dec 05: alignment and calibration algorithms for all detectors • Jun 06: PDC 06 successfully executed (depends on SDC4) • Jun 06: alignment and calibration final algorithms • Dec 06: AliRoot ready for data taking ALICE Computing TDR

  37. Parameters • Computing framework • Distributed computing and Grid • Project Organisation and planning • Resources needed ALICE Computing TDR

  38. Summary of needs ALICE Computing TDR

  39. Capacity profile ALICE Computing TDR

  40. Summary pledged resources ALICE Computing TDR

  41. Conclusions • ALICE computing choices have been validated by experience • The Offline development is on schedule, although contingency is scarce • Collaboration between physicists and computer scientists is excellent • Integration with ROOT allows a fast prototyping and development cycle • Early availability of the Baseline Grid services and their integration with the ALICE-specific services will be crucial • This is a major “risk factor” for ALICE readiness • The development of the analysis infrastructure is particularly late • The manpower situation for the core team has been stabilised thanks to help from the collaboration • The availability of a few CERN long-term positions is of the highest priority ALICE Computing TDR
