
COMPASS off-line computing


Presentation Transcript


  1. COMPASS off-line computing
  • the COMPASS experiment
  • the analysis model
  • the off-line system: hardware, software

  2. The COMPASS Experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy)
  Fixed-target experiment at the CERN SPS
  • approved in February 1997
  • commissioning from May 2000
  • data taking for at least 5 years
  • collaboration: about 200 physicists from Europe and Japan
  • diversified physics programme
    • muon beam: gluon contribution to the nucleon spin, quark spin distribution functions
    • hadron beam: glueballs, charmed baryons, Primakoff reactions
  All measurements require high statistics.

  3. Experimental Apparatus
  • two-stage spectrometer (LAS, SAS)
  • several new detectors: GEMs, Micromegas, straw trackers, scintillating fibers, RICH, silicon detectors, calorimeters, drift chambers and MWPCs (~440 k electronic channels)
  • not an easy geometry: highly inhomogeneous magnetic field (SM1, PTM)

  4. Expected Rates
  • beam intensity: 10^8 muons/s with a duty cycle of 2.4 s / 14 s
  • RAW event size: ~20-30 kB
  • trigger rate: 10^4 events/spill; DAQ designed for 10^5 events/spill (hadron programme)
  • on-line filtering, continuous data acquisition
  • flux: 35 MB/s
  • data-taking period ~100 days/year: ~10^10 events/year, ~300 TB/year of RAW data
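As a quick sanity check of these figures (not part of the original slides; the inputs are only the numbers quoted above), the yearly RAW volume can be reached by two independent routes:

```cpp
#include <cstdio>

// Back-of-envelope check of the RAW data volume quoted above (illustrative
// sketch only; all inputs are the numbers given on the slide).
int main() {
    const double events_per_year = 1e10;            // ~10^10 events/year
    const double event_size_B    = 30e3;            // upper end of the 20-30 kB RAW event size
    const double flux_Bps        = 35e6;            // 35 MB/s from the on-line system
    const double seconds_per_run = 100.0 * 86400.0; // ~100 days of data taking

    // Two independent routes to the yearly RAW volume, both landing near 300 TB:
    const double from_events_TB = events_per_year * event_size_B / 1e12; // 300 TB
    const double from_flux_TB   = flux_Bps * seconds_per_run / 1e12;     // ~302 TB

    std::printf("via event count: %.0f TB/year, via sustained flux: %.0f TB/year\n",
                from_events_TB, from_flux_TB);
    return 0;
}
```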

  5. COMPASS analysis model
  • The RAW data
    • will be stored at CERN (no copy foreseen) and must remain accessible during the whole lifetime of the experiment
    • will be processed at CERN, in parallel with and at the same speed as data acquisition
  • Assuming no pre-processing for calibrations, ~1 reprocessing of the full data set and a processing time of 2 SPECint95-sec/event
    • calibrations "on-line", powerful on-/off-line monitoring, reprocessing of small data subsets if fast RAW data access is available
    • the needed CPU power is 2000 SPECint95 (~20 000 CU)
  • Physics analysis will be performed at the home institutes, as well as specific studies and MC production
    • the relevant data sets must be much smaller in size; remote and concurrent access to RAW data is important ("PANDA" model…)
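A similar back-of-envelope check of the quoted CPU requirement (an illustrative sketch; the ~1000 ev/s average rate is the figure given later for the computing farm):

```cpp
#include <cstdio>

// Illustrative check of the quoted 2000 SPECint95: the farm must keep up with
// an average rate of ~1000 ev/s (the figure quoted later for the CCF) at a
// cost of 2 SPECint95-sec per event.
int main() {
    const double avg_event_rate_hz  = 1000.0; // average event rate to sustain
    const double cost_specint95_sec = 2.0;    // processing time per event

    const double needed_specint95 = avg_event_rate_hz * cost_specint95_sec; // 2000

    // Spread over ~200 CPUs (100 dual-processor PCs, see the CCF slides),
    // each CPU must deliver ~10 SPECint95, i.e. process one event in ~0.2 s.
    const double per_cpu_specint95 = needed_specint95 / 200.0;
    const double per_cpu_s_per_ev  = cost_specint95_sec / per_cpu_specint95;

    std::printf("farm: %.0f SPECint95; per CPU: %.0f SPECint95, %.1f s/event\n",
                needed_specint95, per_cpu_specint95, per_cpu_s_per_ev);
    return 0;
}
```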

  6. General choices
  • In 1997 COMPASS decided to
    • build a completely new software system
    • use OO techniques
    • use C++ as the programming language
    • use an ODB to store the data
  • Given the short time scale, the 'small' collaboration, the novelty, and the well-known difficulty of the tasks, it was mandatory to
    • collaborate with the IT division
    • foresee the use of LHC++ and commercial products (HepODBMS, Objectivity/DB)
    • look at other developments (ROOT)

  7. Off-line system
  • Hardware
    • central data recording
    • COMPASS Computing Farm (CCF) (see M. Lamanna's presentation, Feb. 7, session E)
  • Software
    • data structures and access
    • CORAL (COmpass Reconstruction and AnaLysis) program

  8. Central data recording (CDR)
  An updated version of the CERN Central Data Recording (CDR) scheme.
  • The on-line system
    • performs the event building (and filtering) - ALICE DATE system
    • writes RAW data on local disks as files in byte-stream format (10-20 parallel streams), keeping a "run" structure (typical size 50 GB)
  • The Central Data Recording system
    • transfers the files to the COMPASS Computing Farm at the computer centre (rate of 35 MB/s)
  • The COMPASS Computing Farm (CCF)
    • formats the data into a federated database (Objectivity/DB), converting the RAW events into simple persistent objects
    • performs fast event tagging or clusterisation (if necessary)
    • sends the DB files to the HSM for storage
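To make the chain easier to follow, here is a minimal sketch of the stages as code; every name is hypothetical and stands in for the real DATE / CDR / Objectivity machinery:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical outline of the chain described above. All function names are
// illustrative; the real system uses DATE on-line, the CDR transfer machinery
// and Objectivity/DB on the CCF.
struct RunFile { std::string path; };  // byte-stream file with "run" structure, ~50 GB

// On-line: event building and filtering; RAW data written to local disks
// in 10-20 parallel byte-stream streams.
std::vector<RunFile> buildAndFilterRun() { return { RunFile{"<byte-stream file>"} }; }

// CDR: transfer the run files to the CCF at the computer centre (~35 MB/s).
void transferToCCF(const std::vector<RunFile>&) {}

// CCF: wrap each RAW event in a simple persistent object of the federated DB,
// optionally tag/clusterise, and ship the DB files to the HSM.
void formatIntoFederatedDB(const RunFile&) {}
void tagOrClusterise(const RunFile&)       {}
void sendToHSM(const RunFile&)             {}

int main() {
    const std::vector<RunFile> run = buildAndFilterRun();
    transferToCCF(run);
    for (const RunFile& f : run) {
        formatIntoFederatedDB(f);
        tagOrClusterise(f);
        sendToHSM(f);
        std::printf("stored %s\n", f.path.c_str());
    }
    return 0;
}
```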

  9. COMPASS Computing Farm (CCF)
  • Beginning of 1998, IT/PDP Task Force: computing farms for high-rate experiments (NA48, NA45, and COMPASS)
  • Proposed model for the CCF: hybrid farm with
    • about 10 proprietary Unix servers ("data servers")
    • about 200 PCs ("CPU clients"), 2000 SPECint95 (0.2 s/ev)
    • 3 to 10 TB of disk space
  • Present model: farm with PCs as both "data servers" and "CPU clients"
    • of the order of 100 dual-PIII machines
    • standard PCs running CERN-certified Linux (now: RedHat 5.1 with kernel 2.2.10/12)

  10. CCF

  11. COMPASS Computing Farm (cont.)
  • The data servers will
    • handle the network traffic from the CDR
    • format the RAW events into a federated DB and send it to the HSM
    • receive the data to be processed from the HSM, if needed
    • distribute the RAW events to the PCs for reconstruction
    • receive back the output (persistent objects) and send it to the HSM
  • The CPU clients will
    • process the RAW events (reconstruction of different runs/files has to run in parallel)
  • A real challenge: 1000 ev/s to be stored and processed by 100 dual PCs
  • Tests with prototypes have been going on for two years, with good results
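One way to picture the CPU-client side is the following sketch; the interfaces and names are assumptions for illustration, not the actual farm software:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch of the CPU-client side of the farm. Names and interfaces
// are illustrative only, not the actual CCF/CORAL code.
struct RawEvent  { std::vector<char> dateBuffer; };  // DATE byte-stream buffer
struct RecoEvent { /* tracks, vertices, RICH rings, calo clusters, ... */ };

// Assumed interface to a data server (stubbed here so the sketch compiles).
class DataServerLink {
public:
    bool nextRawEvent(RawEvent&) { return false; }  // pull next event of the assigned run/file
    void storeOutput(const RecoEvent&) {}           // ship persistent output back for HSM storage
};

RecoEvent reconstruct(const RawEvent&) { return {}; }  // decoding, pattern rec., fits (CORAL)

// Each client loops over the events it is assigned; different runs/files are
// processed in parallel across the ~100 dual-CPU clients.
void cpuClientLoop(DataServerLink& server) {
    RawEvent raw;
    while (server.nextRawEvent(raw))
        server.storeOutput(reconstruct(raw));
}

int main() {
    DataServerLink link;
    cpuClientLoop(link);
    std::printf("client done\n");
    return 0;
}
```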

  12. Off-line system
  • Software
    • Data structures
      • event DB
      • experimental conditions DB
      • reconstruction quality control / monitoring DB
      • MC data
    • CORAL: COmpass Reconstruction and AnaLysis program

  13. Data structures
  • Event DB
    • event header containers: small size (kept on disk), basic information such as tag, time, ...
    • RAW event containers: just one object holding the event (DATE) buffer
    • reconstructed data containers: objects for physics analysis
  • direct access to objects; the run/file structure is not visible
  • associations to avoid duplications
    • direct: RAW - reconstructed data
    • via "time": RAW - monitoring events
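The container layout can be sketched, very loosely, with plain (non-persistent) C++ types; the real classes are Objectivity/DB persistent objects handled through HepODBMS, so everything below is only illustrative:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only: the real containers are Objectivity/DB persistent
// objects (HepODBMS); plain pointers stand in for DB associations here.
struct RawEventObj;      // just one object holding the event (DATE) buffer
struct RecoEventObj;     // reconstructed objects for physics analysis
struct MonitoringEvent;  // monitoring data, matched to RAW events via time

// Event header container: small (kept on disk), basic information only.
struct EventHeader {
    std::uint32_t tag;    // fast-selection tag
    std::uint64_t time;   // event time, also the key to monitoring data
    RawEventObj*  raw;    // direct association: header -> RAW event
    RecoEventObj* reco;   // direct association: RAW <-> reconstructed data,
                          // avoiding any duplication of the RAW buffer
};

struct RawEventObj  { std::vector<char> dateBuffer; };
struct RecoEventObj { /* tracks, vertices, RICH, calorimetry, ... */ };

// Events are reached through these objects directly; the run/file structure
// of the byte-stream data is not visible at this level.
```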

  14. Data structures (cont.)
  • Experimental conditions DB
    • includes all information needed for processing and physics analysis (on-line calibrations, geometrical description of the apparatus, ...)
    • based on the CERN porting of the BaBar Condition Database package (included in HepODBMS)
    • versioning of objects
    • access to the valid information using the event time
  • Reconstruction quality control / monitoring data
    • includes all quantities needed for monitoring the stability of the reconstruction and of the apparatus performance
    • stored in Objectivity/DB
  • Monte Carlo data
    • we are using Geant3 (Geant4: under investigation, not in the short term)
    • ntuples, Zebra banks
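A minimal sketch of the time-keyed, versioned access pattern follows (a hypothetical interface, not the actual BaBar/HepODBMS conditions API):

```cpp
#include <cstdint>
#include <map>

// Hypothetical sketch of time-based access to conditions data; the real
// implementation is the BaBar conditions DB package ported to HepODBMS.
struct Calibration { /* e.g. T0s, gains, alignment constants, ... */ };

struct ConditionVersion {
    std::uint64_t validFrom;  // start of validity interval (event time)
    std::uint64_t validTo;    // end of validity interval
    int           version;    // objects are versioned; highest version wins
    Calibration   payload;
};

class ConditionsFolder {
public:
    void store(const ConditionVersion& c) { byStart_.emplace(c.validFrom, c); }

    // Return the condition valid at the given event time (latest version whose
    // validity interval contains that time); nullptr if nothing is valid.
    const Calibration* find(std::uint64_t eventTime) const {
        const Calibration* best = nullptr;
        int bestVersion = -1;
        for (auto it = byStart_.begin();
             it != byStart_.end() && it->first <= eventTime; ++it) {
            const ConditionVersion& c = it->second;
            if (eventTime < c.validTo && c.version > bestVersion) {
                best = &c.payload;
                bestVersion = c.version;
            }
        }
        return best;
    }

private:
    std::multimap<std::uint64_t, ConditionVersion> byStart_;
};
```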

  15. Status
  • Event DB: version 1 ready
  • Experimental conditions DB: in progress, implementation started
  • Reconstruction quality control / monitoring data: starting
  • Monte Carlo data: ready

  16. CORAL (COmpass Reconstruction and AnaLysis program)
  • fully written in C++, using OO techniques
  • modular architecture with
    • a framework providing all basic functionalities
    • well-defined interfaces for all components needed for event reconstruction
    • insulation layers for all "external" packages and for the access to the experimental conditions and event DBs (reading and writing persistent objects - HepODBMS)
  • this assures flexibility in changing both reconstruction components and external packages
  • components for event reconstruction developed in parallel: detector decoding, pattern recognition in geometrical regions, track fit, RICH and calorimeter reconstruction, …
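The kind of interfaces and insulation layers meant here might look, very schematically, like the sketch below; none of these class names are the real CORAL ones:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Purely illustrative sketch of the CORAL design idea: the framework talks to
// reconstruction components and to external packages only through abstract
// interfaces, so either side can be replaced without touching the rest.
struct Event { /* transient event being built up by the reconstruction */ };

// Interface every reconstruction component implements
// (decoding, pattern recognition, track fit, RICH, calorimetry, ...).
class RecoComponent {
public:
    virtual ~RecoComponent() = default;
    virtual void process(Event& ev) = 0;
};

// Insulation layer in front of an "external" package, e.g. the event and
// conditions DB access (HepODBMS/Objectivity).
class PersistencyService {
public:
    virtual ~PersistencyService() = default;
    virtual bool readNextEvent(Event& ev) = 0;     // read persistent objects
    virtual void writeEvent(const Event& ev) = 0;  // write persistent objects
};

// The framework owns the component chain and drives the event loop.
class Framework {
public:
    void addComponent(std::unique_ptr<RecoComponent> c) { chain_.push_back(std::move(c)); }
    void run(PersistencyService& io) {
        Event ev;
        while (io.readNextEvent(ev)) {
            for (auto& c : chain_) c->process(ev);
            io.writeEvent(ev);
        }
    }
private:
    std::vector<std::unique_ptr<RecoComponent>> chain_;
};
```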

  17. CORAL

  18. CORAL status
  • development and tests on Linux; we try to keep portability to other platforms (Solaris)
  • framework: almost ready; work going on to interface new reconstruction components and the access to the experimental conditions DB
  • reconstruction components:
    • integrated inside CORAL and tested: MC event reading and decoding, track pattern recognition, track fit, …
    • integration foreseen soon: RICH pattern recognition, calorimeter reconstruction, vertex fit, ...
    • under development: detector (DATE buffer) decoding, in parallel with on-line, ...
  • Goal: version 1 ready at the end of April 2000, with all basic functionalities, even if not optimised, as for all other off-line system components

  19. General comments
  Most of the problems we had are related to the fact that we are still in a transition period:
  • no stable environment, both for the available software (LHC++) and for the OS (Linux)
  • lack of standard "HEP-made" tools and packages; commercial products do not always seem to be a solution
  • too few examples of HEP software systems using the new techniques
  • expertise and resources: having a large number of physicists knowing the new programming language (and techniques) requires time
  • all the work has been done by a very small, enthusiastic team (3 to 10 FTEs over 2 years)
  Still, we think we made the right choice.

  20. From the minutes of the 16th meeting of FOCUS, held on December 2, 1999:
  "FOCUS … recognises the role that the experiment has as a 'test-bed' for the LHC experiments."
