
The MEG Software Project



  1. The MEG Software Project: Offline Architecture, Computing Model, Status of the Software Organization. Corrado Gatto, PSI, 9/2/2005

  2. The Offline Architecture

  3. Dataflow and Reconstruction Requirements
  • 100 Hz L3 trigger
  • Physics event size: 1.2 MB
  • Raw data throughput: (10+10) Hz × 1.2 MB/phys evt × 0.1 + 80 Hz × 0.01 MB/bkg evt ≈ 3.5 MB/s
  • <evt size>: 35 kB
  • Total raw data storage: 3.5 MB/s × 10^7 s = 35 TB/yr
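  Spelling this out as a quick cross-check (using the 10^7 s of live time per year assumed on the slide), the average event size and the yearly raw-data volume follow directly from the throughput:

\[
\langle \text{evt size} \rangle = \frac{3.5\ \mathrm{MB/s}}{100\ \mathrm{Hz}} = 35\ \mathrm{kB},
\qquad
3.5\ \mathrm{MB/s} \times 10^{7}\ \mathrm{s} = 35\ \mathrm{TB/yr}.
\]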

  4. Requirements for Software Architecture
  • Geant3 compatible (at least at the beginning)
  • Easy interface with existing packages: Geant3, Geant4, external (Fortran) event generators
  • Scalability
  • Simple structure, usable by non-computing experts
  • Written and maintained by few people
  • Portability
  • Use a widely accepted framework
  -> Use ROOT + an existing offline package as the starting point
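  To make the Geant3/Geant4 interfacing requirement concrete, below is a minimal, purely illustrative C++ sketch of the abstraction-layer idea (the role played in practice by ROOT's Virtual Monte Carlo interface): detector and steering code talk to one abstract transport interface, and the concrete engine is chosen at configuration time. All class names in the sketch are hypothetical, not MEG code.

```cpp
// Minimal sketch (hypothetical names throughout): the framework talks to
// the transport engine through one abstract interface, so Geant3 today
// and Geant4/Fluka later can be swapped without touching detector code.
#include <iostream>
#include <memory>
#include <string>

class VirtualTransport {                 // engine-independent interface
public:
   virtual ~VirtualTransport() = default;
   virtual void ProcessEvent(int n) = 0; // transport one event
};

class Geant3Transport : public VirtualTransport {   // wraps the Fortran G3 engine
public:
   void ProcessEvent(int n) override { std::cout << "G3 event " << n << "\n"; }
};

class Geant4Transport : public VirtualTransport {   // future C++ engine
public:
   void ProcessEvent(int n) override { std::cout << "G4 event " << n << "\n"; }
};

std::unique_ptr<VirtualTransport> MakeEngine(const std::string& name)
{
   if (name == "G4") return std::make_unique<Geant4Transport>();
   return std::make_unique<Geant3Transport>();      // default: Geant3
}

int main()
{
   auto mc = MakeEngine("G3");   // chosen from the run configuration
   mc->ProcessEvent(1);
}
```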

  5. Expected Raw Performance (pure Linux setup, 20 data sources, Fast Ethernet local connection)

  6. MC & Data Processing Scheme
  • Simulation of signal and backgrounds (Fortran & G3, or C++ & G3/G4/Fluka) produces separate hits files for signal, background 1 and background 2
  • An SDigitizer (Fortran & G3, or C++) converts each hits file into an SDigits file
  • The Digitizer merges the signal and background SDigits into merged Digits files
  • Reconstruction (C++) runs on the merged Digits files and, in parallel, on the raw data, producing ESD in both cases
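  A minimal sketch of the processing chain above as it might look in C++; the class names (MEGSDigitizer, MEGDigitizer, MEGReconstructor) and file names are illustrative assumptions, not the actual MEG framework.

```cpp
// Sketch of the MC/data processing chain: hits -> SDigits -> merged Digits
// -> ESD, with the same reconstruction code also running on raw data.
// All MEG* class names and file names are hypothetical.
#include <iostream>
#include <string>
#include <vector>

struct MEGSDigitizer {            // hits -> summable digits, one stream
   std::string Run(const std::string& hitsFile) {
      std::cout << "SDigitizing " << hitsFile << "\n";
      return hitsFile + ".sdigits";
   }
};

struct MEGDigitizer {             // merge signal + background SDigits
   std::string Merge(const std::vector<std::string>& sdigits) {
      std::cout << "Merging " << sdigits.size() << " SDigits streams\n";
      return "merged.digits";
   }
};

struct MEGReconstructor {         // digits (or raw data) -> ESD
   std::string Run(const std::string& digitsOrRaw) {
      std::cout << "Reconstructing " << digitsOrRaw << "\n";
      return digitsOrRaw + ".esd";
   }
};

int main() {
   MEGSDigitizer sdig;
   std::vector<std::string> sdigits = {
      sdig.Run("hits_signal.root"),
      sdig.Run("hits_bkg1.root"),
      sdig.Run("hits_bkg2.root")};

   MEGDigitizer dig;
   std::string digits = dig.Merge(sdigits);   // MC path

   MEGReconstructor reco;
   reco.Run(digits);          // ESD from merged MC digits
   reco.Run("raw_data.root"); // ESD from raw data, same reconstruction code
}
```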

  7. General Architecture: Guidelines
  • Ensure a high level of modularity (for ease of maintenance)
  • Absence of code dependencies between different detector modules (to avoid C++ header problems)
  • The structure of every detector package is designed so that static parameters (like geometry and detector response parameters) are stored in distinct objects
  • The data structure is built up as ROOT TTree objects
  • Access is possible either to the full set of correlated data (i.e., the event) or to one or more sub-samples, stored in different branches of the data structure (TBranch class) and corresponding to one or more detectors
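  A short sketch of the per-detector branch layout using the standard ROOT TTree/TBranch API; the branch names (DCH, EMC) and the payload containers are illustrative assumptions.

```cpp
// One branch per detector: correlated data for the same event live in the
// same tree entry, but each detector sub-sample can be read back alone.
#include "TClonesArray.h"
#include "TFile.h"
#include "TTree.h"

void write_event_tree()
{
   TFile f("event.root", "RECREATE");
   TTree tree("T", "MEG event data");

   TClonesArray* dchHits = new TClonesArray("TObject");  // placeholder payloads
   TClonesArray* emcHits = new TClonesArray("TObject");
   tree.Branch("DCH", &dchHits);    // drift chamber branch
   tree.Branch("EMC", &emcHits);    // calorimeter branch

   tree.Fill();        // one entry = one event
   tree.Write();
}

void read_only_dch()
{
   TFile f("event.root");
   TTree* tree = (TTree*)f.Get("T");

   // Disable everything, then re-enable only the DCH sub-sample:
   tree->SetBranchStatus("*", 0);
   tree->SetBranchStatus("DCH*", 1);

   TClonesArray* dchHits = nullptr;
   tree->SetBranchAddress("DCH", &dchHits);
   tree->GetEntry(0);  // reads only the DCH branch from disk
}
```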

  8. Computing Model

  9. Elements of a Computing Model
  • Components
    • Data Model: event data sizes, formats, streaming
      • Data "Tiers" (DST/ESD/AOD etc.): roles, accessibility, distribution, ...
      • Calibration/conditions data: flow, latencies, update frequency
      • Simulation: sizes, distribution
      • File size
    • Analysis Model
      • Canonical group needs in terms of data, streams, re-processing, calibrations
      • Data movement, job movement, priority management
      • Interactive analysis
  • Implementation
    • Computing strategy and deployment: roles of computing tiers, data distribution between tiers, data management architecture
    • Databases: masters, updates, hierarchy
    • Active/passive experiment policy
  • Computing Specifications
    • Profiles (Tier N & time): processors, storage, network (wide/local), database services, specialized servers
    • Middleware requirements

  10. CPU Needed (very rough estimate)
  • Assume: trigger rate 20 Hz, beam duty cycle 50%, MC = data, calibration 1 Hz
  • Estimates obtained by scaling existing code and a C++ framework
  • Calibration: 1 - 7 CPUs
  • MC production: 11 - 33 CPUs
  • MC reconstruction: 3 - 14 CPUs/reprocessing
  • Raw data reconstruction: 3 - 14 CPUs/reprocessing
  • Alignment: to be estimated
  • Optimal solution for reconstruction: take a beam duty cycle of 100% and use the no-beam time for reprocessing -> double the CPU for reconstruction

  11. Storage Needed (very rough estimate)
  • Assume (from the ALICE data model):
    • Raw data (compressed) event size: 120 kB
    • Hits (from MC): 9 kB/evt
    • SDigit + Digit size (from MC): 30 kB + 120 kB
    • Track references (from MC): 15 kB
    • Kinematics (from MC): 4 kB
    • ESD size (data or MC): 30 kB
  • Storage required yearly (L3 trigger: 20 Hz, duty cycle: 50%):
    • Raw data: 12 TB (+ calibration)
    • Hits (from MC): 0.9 TB
    • Digits (from MC): 15 TB
    • Track references (from MC): 1.5 TB
    • Kinematics (from MC): 0.4 TB
    • ESD (data or MC): 3 TB/reprocessing
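  As a cross-check of the raw-data figure (using the 10^7 s of running per year assumed on slide 3 together with the 20 Hz trigger and 50% duty cycle above):

\[
20\ \mathrm{Hz} \times 0.5 \times 10^{7}\ \mathrm{s} = 10^{8}\ \mathrm{events/yr},
\qquad
10^{8} \times 120\ \mathrm{kB} = 12\ \mathrm{TB/yr}.
\]

  The MC-related numbers scale the same way with the per-event sizes listed above (e.g. 9 kB of hits per event gives 0.9 TB/yr).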

  12. A Computing Model for MEG
  • The crucial decisions are being taken at the collaboration level
  • Very dependent on the CPU and storage requirements
  • Two degrees of complexity are being considered
  • INFN has requested a CM design by Nov 2005

  13. Data Access: ROOT + RDBMS Model
  • ROOT files hold the event store: trees, histograms, geometries
  • An RDBMS (Oracle or MySQL) holds the calibrations and the run/file catalog
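  A minimal sketch of how ROOT code can talk to such an RDBMS through ROOT's TSQLServer interface; the connection string, credentials and the runcatalog table are assumptions made up for illustration.

```cpp
// Query a MySQL run/file catalog from ROOT via the TSQLServer interface.
// The database URL, credentials and table layout are hypothetical.
#include "TSQLResult.h"
#include "TSQLRow.h"
#include "TSQLServer.h"
#include <iostream>

void read_run_catalog()
{
   TSQLServer* db = TSQLServer::Connect("mysql://dbhost/megdb", "reader", "pass");
   if (!db) { std::cerr << "connection failed\n"; return; }

   TSQLResult* res = db->Query("SELECT run, nevents FROM runcatalog WHERE quality='good'");
   if (!res) { delete db; return; }

   while (TSQLRow* row = res->Next()) {
      std::cout << "run " << row->GetField(0)
                << "  events " << row->GetField(1) << "\n";
      delete row;
   }
   delete res;
   delete db;
}
```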

  14. Computing Deployment: Implementation #1 (Concentrated Computing)
  • Will be considered only in case the CPUs needed exceed PSI's capabilities
  • Easiest implementation
  • Requires the least manpower for maintenance
  • PROOF is essential for analysis and DQM

  15. Computing Deployment: Implementation #2 (Distributed Computing)
  [Diagram: PSI as Tier-0 (and possibly Tier-1), connected to Tier-1 centres at 622 Mbps - 2.5 Gbps, Tier-2 centres (universities, labs) at 155 - 622 Mbps, down to department and desktop level]
  • Posting of the data / catalog synchronization
  • Data distribution dependent upon connection speed
    • PSI -> INFN OK (3 MB/s)
    • PSI -> Japan needs GRID
  • "Transparent" user access to applications and all data

  16. Processing Flow
  • Tier 0: DAQ and raw data, calibration and first reconstruction (Reco 1) producing ESD; export caches feed the lower tiers
  • Tier 1-2: reconstruction and reprocessing of ESD, MC production, AOD and Tag production; import/export caches and catalogs
  • Tier 3-4: analysis on AOD, Tag and calibrations, producing DPD; import caches and catalogs

  17. Computing Model Configuration

  18. How We Will Proceed
  • Decide for a single-Tier or multi-Tier CM (depends heavily on the CPU + storage needed)
  • PSI is OK for Tier-0 (existing infrastructure + Horizon cluster)
  • Find the candidate sites for Tier-1
  • Requirements for a Tier-1:
    • 1 FTE for job running and control
    • 0.5 - 1 FTE for data export, catalog maintenance and DQM
    • If GRID is needed, 0.3 - 0.5 FTE for GRID maintenance
    • Software installation would be the responsibility of the Offline group (after startup)

  19. Computing Model Status
  • Started interaction with PSI's Computing Center (J. Baschnagel and coll.)
  • The PSI CC survey was very encouraging:
    • Physical space and electrical power are not a limitation
    • However, the CC is running near 80% of its cooling and clean-power capacity
    • The relevant services (backup, connection to the experimental area) are OK
    • Alternative options are being considered (set up the farm in the experimental area and use the CC only for backup)
  • Horizon Cluster
    • PSI stands well as a Tier-0 candidate (in a multi-Tier CM)
    • It might have enough resources to fulfill all of MEG's computing requirements

  20. Status of the Software

  21. Software Organization
  • Development of the Montecarlo (G3 + Fortran) and of Calibration/Reconstruction (VMC + ROOT) will initially proceed in parallel
  • Migration will eventually occur later (end 2005)
  • The Montecarlo group is in charge of the simulation (geometry + event generator)
  • The Offline group is in charge of the rest (architecture, framework, reconstruction, computing, etc.)
  • Detector-specific code is developed along with the detector experts
  • Present analyses for R&D run within the ROME environment (same as Online)

  22. Status of the Montecarlo: DCH (P. Cattaneo)
  • Progress
    • Implementation of the cable duct and various mechanical supports
    • Reorganization of the GEANT volumes inside the chambers
    • First integration of the time-to-distance profiles from GARFIELD
  • Next
    • Completing the integration of the GARFIELD profiles
    • Calculation of time profiles for signals
    • Effect of electrode patterns on the signals
    • Electronics simulation
    • Signal digitization
    • Improving details of cards, cables

  23. Status of the Montecarlo: EMCAL
  • Progress
    • Pisa model: implement the new geometry
    • Pisa model: new tracking model (not based on GTNEXT)
    • Tokio model: GEANT-based (GNEXT) photon tracking, including refraction at the PMT quartz window and at the PMT holder
    • Tokio model: absorption
    • Tokio model: scattering
    • Tokio model: ZEBRA output (number of p.e. in each PMT, PMT position; to be replaced by MySQL/XML)
    • Tokio model: ZEBRA-to-Ntuple converter
  • Next
    • Update the geometry and complete the integration between the Pisa and Tokio models
    • Signal digitization
    • ZEBRA output for hit timing

  24. Status of the Montecarlo: TOF
  • Progress
    • Implementation of the actual geometry: tapered slanted bars, square fibers
    • Addition of phototubes, photodiodes and the related mechanics
  • Next
    • Generation of photons in the bars/fibers
    • Propagation of photons to the PMTs & photodiodes
    • Electronics and PMT/photodiode simulation
    • Improving details of the material distribution, e.g. better PMTs, cables
    • Add mechanical supports
    • Signal digitization

  25. Status of the Montecarlo: more
  • Graphics: display of the geometry only (no event), some zooming capability, possibility of displaying a single volume
  • Addition of phototubes, photodiodes and the related mechanics
  • Beam, target and trigger: preliminary approach, outside GEM

  26. Software used in the LP beam test (R. Sawada)

  27. Procedure of data taking
  [Diagram: data-taking procedure with BoR / Event / EoR records; items shown include the run number, time, trigger mode, configuration ID, channel and geometry info, calibration data, the number of events, and output to histograms and trees (ntuples).]

  28. Procedure of offline analysis
  [Diagram: offline analysis procedure with "before a run" / BoR / Event / EoR stages; items shown include the run number, trigger mode, channel and geometry info, calibrations applied at each stage, and the processed data output.]

  29. Status of the Offline Project
  • Preliminary Offline architecture approved by MEG (June 2004)
  • INFN - CSN1 also approved the project (Sept. 2004)
  • Funds have been provided for installing a minifarm in Lecce (22 kEUR)
  • Real work started in November 2004

  30. Immediate Tasks and Objectives
  [Diagram: DAQ -> online disk server -> prompt calibration -> staging to tape -> reconstruction farm]
  • Core Offline system development
    • Set up a system to test the Offline code: test functionalities & performance, RDBMS
    • Prototype the main program: module classes, steering program, FastMC

  31. Manpower Estimate [Table of available FTEs; +1 waiting for funds]

  32. The People Involved
  • Core Offline (all 100%, except for Chiri):
    • Coordinator: Gatto
    • Steering program: Di Benedetto, Gatto
    • FastMC: Mazzacane, Tassielli
    • TS implementation: Chiri
    • Interface to raw data: Chiri, Di Benedetto, Tassielli
    • Calibration classes & interface to RDBMS: Barbareschi (*will start in Mar. 2005)
    • Trigger: Siragusa
  • Detector experts (all <100%):
    • LXe: Signorelli, Yamada, Savada
    • TS: Schneebeli (hit), Hajime (pattern), Lecce (pattern)
    • TOF: Pavia/Genova
    • Trigger: Nicolo' (Pisa)
    • Magnet: Ootani
  • Montecarlo (all <100%):
    • LXe: Cattaneo, Cei, Yamada

  33. Milestones
  • Offline
    • Start-up: October 2004
    • Start deploying the minifarm: January 2005
    • First working version of the framework (FastMC + reconstruction): April 2005 (badly needed to proceed with the Computing Model)
    • Initial estimate of the CPU/storage needed: June 2005
    • Computing Model: November 2005
    • MDC: 4th quarter 2005
  • Montecarlo
    • Include the calorimeter in GEM
    • Keep the existing MC in the Geant3 framework; form a panel to decide if and how to migrate to ROOT: 4th quarter 2005


  35. Conclusions
  • The Offline project has been approved by MEG and by INFN
  • The Offline group has consolidated (mostly in Lecce)
  • Work has started
  • The minifarm for testing the software is being set up
  • The definition of the Computing Model is under way
  • The Montecarlo is on its way to a detailed description of the detector

  36. Backup Slides

  37. Compare to the others

  38. Data Model (MONARC)
  • ESD (Event Summary Data)
    • contain the reconstructed tracks (for example track pt, particle ID, pseudorapidity and phi, and the like), the covariance matrix of the tracks, the list of track segments making up a track, etc.
  • AOD (Analysis Object Data)
    • contain information on the event that will facilitate the analysis (for example centrality, multiplicity, number of positrons, number of EM showers, and the like)
  • Tag objects
    • identify the event by its physics signature (for example, a cosmic ray) and are much smaller than the other objects; Tag data would likely be stored in a database and used as the source for the event selection
  • DPD (Derived Physics Data)
    • are constructed from the physics analysis of AOD and Tag objects
    • they are specific to the selected type of physics analysis (e.g. mu -> e gamma)
    • typically consist of histograms or ntuple-like objects
    • these objects will in general be stored locally on the workstation performing the analysis, and thus do not add any constraint to the overall data-storage resources
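  As a rough illustration of how these tiers could map onto C++ classes, here is a minimal sketch; the names and fields are hypothetical, not the actual MEG data classes (in the real framework they would be ROOT-storable objects written to trees).

```cpp
// Hypothetical data-tier sketches: each tier keeps progressively less
// information per event, so it can live on cheaper / closer storage.
#include <cstdint>
#include <iostream>
#include <vector>

struct ESDTrack {                 // reconstructed track in the ESD
   double pt, eta, phi;
   double cov[15];                // packed covariance matrix
   int    particleId;
};

struct ESD {                      // Event Summary Data
   std::vector<ESDTrack> tracks;
};

struct AOD {                      // Analysis Object Data
   int nPositrons   = 0;          // quantities that steer the analysis
   int nEMShowers   = 0;
   int multiplicity = 0;
};

struct Tag {                      // tiny, database-friendly event tag
   std::uint32_t run = 0, event = 0;
   std::uint32_t physicsSignature = 0;   // e.g. a "cosmic ray" bit
};

int main() {
   std::cout << "per-event sizes in this sketch: track "
             << sizeof(ESDTrack) << " B, AOD " << sizeof(AOD)
             << " B, Tag " << sizeof(Tag) << " B\n";
}
```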

  39. Reconstruction Structure
  • A Run Manager (one for MC, one for reconstruction) executes the detector objects (Detector classes: DCH, TOF, EMC) in the order of a list
  • Each detector executes a list of detector tasks
  • One or more global objects (Global Reco) execute a list of tasks involving objects from several detectors
  • The data are stored in a ROOT database as tree branches
  • On-demand actions are possible, but are not the default
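  A minimal sketch of this run-manager/detector-task pattern built on ROOT's TTask class, which executes a hierarchy of sub-tasks in list order; the MEG* names and the way the tasks are grouped are assumptions for illustration, not the actual MEG classes.

```cpp
// Run-manager pattern sketched with ROOT's TTask: the run manager is the
// top-level task, each detector is a sub-task, and each detector owns its
// own ordered list of detector tasks.  All MEG* names are hypothetical.
#include "TTask.h"
#include <iostream>

class MEGDetectorTask : public TTask {
public:
   MEGDetectorTask(const char* name) : TTask(name, "detector task") {}
   void Exec(Option_t* /*opt*/ = "") override {
      std::cout << "executing " << GetName() << std::endl;
   }
};

void run_reconstruction()
{
   TTask runManager("RecoRunManager", "executes detectors in list order");

   // One sub-task per detector, each with its own ordered task list.
   TTask* dch = new TTask("DCH", "drift chamber");
   dch->Add(new MEGDetectorTask("DCH.FindHits"));
   dch->Add(new MEGDetectorTask("DCH.MakeTracks"));

   TTask* emc = new TTask("EMC", "LXe calorimeter");
   emc->Add(new MEGDetectorTask("EMC.Clusterize"));

   runManager.Add(dch);
   runManager.Add(emc);

   // A global object combines information from several detectors.
   runManager.Add(new MEGDetectorTask("GlobalReco.MatchTrackCluster"));

   runManager.ExecuteTask();   // walks the whole hierarchy in order
}
```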

  40. Responsibilities & Tasks (all software)
  • Detector experts:
    • LXe: Signorelli, Yamada, Savada
    • DC: Schneebeli (hit), Hajime (Pattern), Lecce (Pattern)
    • TC: Pavia/Genova
    • Trigger: Nicolo' (Pisa)
    • Magnet: Ootani

  41. Manpower Estimate (framework only) [Table: 4 FTE from Naples + 2.2 from Lecce, +1 waiting for funds; one position available at Lecce, job posted: apply now]
