
The Muon Conditions Data Management: Database Architecture and Software Infrastructure


Presentation Transcript


  1. The Muon Conditions Data Management: Database Architecture and Software Infrastructure
Monica Verducci, University of Wuerzburg & CERN
5-9 October 2009, 11th ICATPP, Villa Olmo, Como (Italy)

  2. Outline
• Introduction to the ATLAS Muon Spectrometer
• Muon Spectrometer Data Flow: Trigger, Streams and Conditions Data
• Muon Conditions Database: Storage and Architecture
• Software Infrastructure
• Applications and Commissioning Tests

  3. The ATLAS Muon Spectrometer

  4. The ATLAS Detector

  5. The Muon Spectrometer
• Three toroidal magnets create a magnetic field with:
• Barrel: ∫B dl = 2-6 Tm
• Endcaps: ∫B dl = 4-8 Tm
• RPC & TGC: trigger the detector and measure the muons in the xy and Rz planes with an accuracy of several mm.
• CSC: measure the muons in Rz with ~80 µm accuracy and in xy with several mm. Cover 2 < |η| < 2.7.
• MDT: measure the muons in Rz with ~80 µm accuracy. Cover |η| < 2.

  6. The Muon Spectrometer II
Precision chambers:
• Monitored Drift Tubes (MDT): 1108 chambers, 339k channels
• Cathode Strip Chambers (CSC): 32 chambers, 31k channels
Trigger chambers:
• Resistive Plate Chambers (RPC): 560 chambers, 359k channels
• Thin Gap Chambers (TGC): 3588 chambers, 359k channels
Good resolution in timing, pT and position measurement is needed to achieve the physics goals; this requires extremely fine checks of all the parts of each subdetector, and hence a huge amount of information.

  7. Data Flow and Conditions Data

  8. ATLAS Event Flow
[Diagram: the hierarchical trigger system performs the primary event selection, reducing ~10^9 events/s (1 GHz; at ~1 MB/event, ~PB/s) to ~MB/s, i.e. ~PB/year of raw data. The trigger output streams (express, calibration & alignment, pathological, detector parameters) carry non-event data into the Configuration DB and the Conditions DB, which are read back by the offline reconstruction framework ATHENA.]

  9. The MUON “Non-Event Data”
Typical ATLAS “non-event data” are:
• Calibration and alignment data, from the express and calibration streams, for a total data rate of about 32 MB/s, dominated by the inclusive high-pt leptons (13% of EF bandwidth = 20 Hz of 1.6 MB events); RAW data -> 450 TB/year. More streams are now subsumed into the express stream.
• The PVSS Oracle archive, i.e. the archive of the DCS “slow control system” data, and DAQ data via the OKS DB.
• Detector configuration and connectivity data, plus specific subdetector data.
These conditions data are mainly used for:
• Diagnostics by detector experts (geometry, DCS, sub-detector hardware and software)
• Defining the configuration of the TDAQ/DCS/subdetector hardware and software to be used in the following run
• Calibration and alignment
• Event reconstruction and analysis

  10. Muon Conditions Data from Trigger Streams
[Diagram: the detectors at 40 MHz (25 ns bunch crossing, ~PB/s) feed the front-end pipelines; LVL1 selects ~10^5 Hz within µs into the readout buffers; LVL2 selects ~10^3 Hz within ms through a switching network; the LVL3 processor farms output ~10^2 Hz within seconds (~MB/s).]
• The ATLAS trigger will produce 4 streams (200 Hz, 320 MB/s):
• Primary stream (5 streams based on trigger info: e, µ, jet)
• Calibration and Alignment stream (10%)
• Express stream (rapid processing of events also included in the primary stream; 30 MB/s, 10%)
• Pathological events (events not accepted by the EF)
Some muon conditions data will be produced by detector analysis performed on:
• The Calibration and Alignment stream: muons are extracted from the second level trigger (LVL2) at a rate of ~1 kHz, and the data are streamed to 3 calibration centres (Ann Arbor, Munich, Rome; Naples for the RPCs), with 100 CPUs each and ~1 day latency for the full chain
• The Express stream

  11. Muon Conditions Data
• Calibration for the precision chambers
• Alignment from sensors and from tracks
• Efficiency flags for the trigger chambers
• Data quality flags (dead/noisy channels) and final status for the monitoring
• Temperature map, B-field map
• DCS information (HV, LV, gas, ...)
• DAQ run information (chamber initialized)
• Subdetector configuration parameters (cabling map, commissioning flags, ...)
Sources: the calibration stream and offline algorithms (express stream), analysis algorithms, hardware sensors, the OKS2COOL and PVSS2COOL transfer applications, and constructor parameters.

  12. Storage of the ‘Non-Event’ Data
Different database storage solutions deal with the different hardware and software working points of each subdetector:
• Hardware Configuration DB: an Oracle private DB, with architecture and maintenance under the detector experts
• Calibration & Alignment DB: Oracle private DBs, one for the MDT calibration (replicated in the three calibration centres) and one for the alignment sensors
• Conditions DB: contains a subset of the information, at lower granularity, in the COOL production DB

  13. Conditions Database
Conditions data are non-event data that:
• vary with time
• may exist in different versions
• come from both offline and online sources
The Conditions DB is mainly accessed by the ATLAS offline reconstruction framework (ATHENA). Conditions databases are distributed world-wide for scalability, since they are accessed by an “unlimited” number of computers on the Grid: simulation jobs, reconstruction jobs, analysis jobs, ... Within ATLAS, the master conditions database is at CERN and, using the Oracle replica mechanism, it will be available in all Tier-1 centres. The technology used in the Conditions DB is an LCG product: COOL (COnditions Objects for LHC), implemented using CORAL.

  14. COOL Interface for the Conditions Database
The interface provided by COOL:
• Builds on the LCG RelationalAccessLayer software, which allows database applications to be written independently of the underlying database technology (Oracle, MySQL or SQLite)
• COOL provides a C++ API and an underlying database schema to support the data model
• Once a COOL database has been created and populated, users can also interact with it directly, using lower-level database tools
• COOL implements an interval-of-validity (IOV) database, with a schema optimized for IOV retrieval and look-up:
• objects stored or referenced in COOL have an associated start and end time between which they are valid
• times are specified either as run/event numbers or as absolute timestamps, in agreement with the stored meta-data
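To make the interval-of-validity model concrete, here is a minimal Python sketch of an IOV lookup (illustrative only, not the actual COOL C++ API; the folder class and the HV payload are hypothetical):

```python
from bisect import bisect_right

class IovFolder:
    """COOL-like folder sketch: objects of one type, each valid in [since, until)."""
    def __init__(self):
        self._objects = []  # (since, until, payload), kept sorted by `since`

    def store(self, since, until, payload):
        self._objects.append((since, until, payload))
        self._objects.sort(key=lambda o: o[0])

    def find(self, time):
        """Return the payload valid at `time` (run/event or timestamp), or None."""
        starts = [o[0] for o in self._objects]
        i = bisect_right(starts, time) - 1
        if i >= 0 and self._objects[i][1] > time:
            return self._objects[i][2]
        return None

# Hypothetical usage: a high-voltage value valid from time 1 up to (not including) 11.
hv = IovFolder()
hv.store(since=1, until=11, payload={"HV": 3080.0})
print(hv.find(5))   # {'HV': 3080.0}
print(hv.find(42))  # None: no object valid at this time
```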

  15. COOL data are stored in folders (tables); a database is a set of folders. Within each folder, several objects of the same type are stored, each with its own interval-of-validity range. COOL folders can be:
• SingleVersion: only one object can be valid at any given time value, e.g. DCS data, where the folder simply records the values as they change with time
• MultiVersion: several objects can be valid for the same time, distinguished by different tags, e.g. calibration data, where several valid calibration sets may exist for the same range of runs (different processing passes or calibration algorithms)
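A minimal sketch of the MultiVersion idea, keeping one IOV timeline per tag (again illustrative Python with hypothetical tag names and payloads, not the real COOL schema):

```python
class MultiVersionFolder:
    """COOL MultiVersion folder sketch: one interval-of-validity timeline per tag."""
    def __init__(self):
        self._tags = {}  # tag name -> list of (since, until, payload)

    def store(self, tag, since, until, payload):
        self._tags.setdefault(tag, []).append((since, until, payload))

    def find(self, tag, time):
        for since, until, payload in self._tags.get(tag, []):
            if since <= time < until:
                return payload
        return None

# Two calibration sets valid for the same run range, distinguished by tag.
calib = MultiVersionFolder()
calib.store("Cosmics-M4-v1", since=1, until=100, payload={"t0": 512.3})
calib.store("Cosmics-M4-v2", since=1, until=100, payload={"t0": 509.8})
print(calib.find("Cosmics-M4-v2", 50))  # {'t0': 509.8}
```

A SingleVersion folder is the degenerate case with exactly one timeline and no tags.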

  16. Muon Spectrometer Examples
• DCS: temperature or HV values depend on the IOV and are relatively simple and small -> inline payload
• Calibration data and alignment: parameters with high granularity, where several parts can share the same IOV -> CLOB payload
[Table sketches: a DCS folder row with columns Since / Until / ChId / Payload / Tag, holding HV and LV values valid from run 1 event 10 to run 10 event 20; a calibration folder row with the same columns, holding a T0 CLOB tagged “Cosmics M4”.]
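The two payload styles can be sketched as follows (field names and the JSON serialization are purely illustrative; the real folders use their own schemas and blob formats):

```python
import json

# Inline payload: a few simple columns per channel, e.g. DCS HV/LV readings.
dcs_record = {"channel": 1, "HV": 3080.0, "LV": 5.0}

# CLOB payload: many calibration constants serialized into one large text blob
# stored against a single IOV, e.g. t0 constants for many tubes at once.
t0_constants = {"tube_%04d" % i: 500.0 + 0.01 * i for i in range(1000)}
clob_record = {"channel": 1, "data": json.dumps(t0_constants)}

print(dcs_record)
print(len(clob_record["data"]), "bytes of CLOB payload")
```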

  17. Conditions Servers and Schema
[Diagram: Oracle Streams replicate the COOL data from the online server (ATONR) at Point 1 to the offline server at Tier-0 and on to the Tier-1 servers.]
• Each subdetector (MDT, RPC, CSC, TGC) has a different schema, necessary because of the options introduced by the Oracle Streams architecture used to replicate data from the ATLAS online server (ATONR) to the ATLAS offline server in the CERN computer centre (ATLR) and on to the servers at each of the Tier-1 sites. The schemas are as follows:
• ATLAS_COOLONL_XYZ, which can be written on the online server and is replicated to the offline server and to the Tier-1s
• ATLAS_COOLOFL_XYZ, which can be written on the offline server and is replicated to the Tier-1s
• Each schema is associated with several accounts:
• a schema owner account
• a writer account ATLAS_COOLONL_XYZ_W, used for insert and tag operations
• a generic reader account ATLAS_COOL_READER, used for read-only access to all the COOL schemas on the online, offline and Tier-1 servers
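The naming convention above can be spelled out per subdetector; a short sketch (the account names follow the pattern quoted in the slide, with XYZ standing for the subdetector):

```python
# Enumerate the schema/account names implied by the convention above.
for det in ("MDT", "RPC", "CSC", "TGC"):
    online = f"ATLAS_COOLONL_{det}"    # writable on the online server (ATONR)
    offline = f"ATLAS_COOLOFL_{det}"   # writable on the offline server (ATLR)
    writer = f"{online}_W"             # insert/tag operations only
    print(online, offline, writer, "ATLAS_COOL_READER")
```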

  18. Writing in the Conditions DB
• There are several sources of conditions data (online at the pit, offline, calibration stream)
• The software analysis and the publishing interfaces are in general different, depending on the framework in which the analysis code runs
• The final mechanism is unique: every insertion passes via an SQLite file and, after checking, is stored in the official production DB using Python scripts
• The service names are defined in the Oracle tnsnames.ora file, and the connection/password names are handled automatically in the authentication.xml file
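A conceptual sketch of this staged workflow (write to a local SQLite file, validate, then merge into production) using only the Python standard library; the table layout and file name are hypothetical, since the real mechanism goes through COOL folders and the official ATLAS tools:

```python
import sqlite3

def write_staging(path, rows):
    """Write candidate conditions rows to a local SQLite staging file."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS cond "
                "(since INTEGER, until INTEGER, channel INTEGER, payload TEXT)")
    con.executemany("INSERT INTO cond VALUES (?, ?, ?, ?)", rows)
    con.commit()
    con.close()

def validate(path):
    """Reject malformed IOVs before anything touches the production DB."""
    con = sqlite3.connect(path)
    bad = con.execute("SELECT COUNT(*) FROM cond WHERE until <= since").fetchone()[0]
    con.close()
    return bad == 0

rows = [(1, 11, 0, "HV=3080.0"), (11, 21, 0, "HV=3085.0")]
write_staging("muon_cond_stage.db", rows)
print("ready for production merge:", validate("muon_cond_stage.db"))
```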

  19. Muon Software Interface
• A unique interface inside the reconstruction package: MuonConditionsSummarySvc
• Every subdetector has its own code to handle its conditions data (DCS, status flags, ...) via an XYZConditionsSummarySvc, which initializes several tools: DCSConditionsTool, DQConditionsTool, ...
• Using the IOV service, the information used in the reconstruction algorithms is always “on time” with respect to the event currently being processed
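The service/tool layering can be sketched as follows (a Python stand-in for the C++ Athena services; the decision logic inside the tools is invented for illustration):

```python
class DCSConditionsTool:
    """Toy DCS check: in reality this would consult HV/LV readings."""
    def is_good(self, channel):
        return True

class DQConditionsTool:
    """Toy data-quality check: channel 7 plays the role of a noisy channel."""
    def is_good(self, channel):
        return channel != 7

class MDTConditionsSummarySvc:
    """Per-subdetector summary service aggregating its conditions tools."""
    def __init__(self):
        self.tools = [DCSConditionsTool(), DQConditionsTool()]

    def is_good_channel(self, channel):
        # A channel is usable only if every tool accepts it.
        return all(tool.is_good(channel) for tool in self.tools)

svc = MDTConditionsSummarySvc()
print(svc.is_good_channel(3))  # True
print(svc.is_good_channel(7))  # False: rejected by the data-quality tool
```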

  20. Access by the Reconstruction
Access to COOL from Athena is done via the Athena IOVDbSvc, which provides an interface between the conditions data objects in the Athena transient detector store (TDS) and the conditions database itself. While event data are read, the IOVDbSvc ensures that the correct conditions data objects are always loaded into the Athena TDS for the event currently being analyzed.
[Diagram of the callback sequence between the IOVSvc, the IOVDbSvc, the transient detector store and the algorithms: (1) begin run/event; (2) expired objects are deleted from the store; (3) a callback fires; (4) the conditions object collection is retrieved; (5) its address is updated; (6) the payload for the relevant IOV is fetched from the conditions DB; (7) the new IOV is set for the algorithm or AlgTool.]
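A minimal sketch of the caching behaviour this flow implements: the cached conditions object is reused while the event time stays inside its validity range, and a reload (with a callback to dependent algorithms) happens only when the IOV expires. Everything here is illustrative; the "database" is just a list:

```python
class IovCache:
    """Reload the cached payload only when the event leaves its IOV."""
    def __init__(self, database, on_update=None):
        self._db = database          # list of (since, until, payload)
        self._cached = None          # currently valid (since, until, payload)
        self._on_update = on_update  # callback fired on each reload

    def get(self, event_time):
        if self._cached and self._cached[0] <= event_time < self._cached[1]:
            return self._cached[2]   # still valid: no database access
        for since, until, payload in self._db:
            if since <= event_time < until:
                self._cached = (since, until, payload)
                if self._on_update:
                    self._on_update(payload)  # notify dependent algorithms
                return payload
        raise LookupError(f"no conditions valid at {event_time}")

db = [(0, 100, "align-v1"), (100, 200, "align-v2")]
cache = IovCache(db, on_update=lambda p: print("reloaded:", p))
for evt in (5, 50, 150):
    cache.get(evt)  # reloads at evt=5 and evt=150 only
```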

  21. Commissioning and Tests
• Tests of the whole chain, of the transfer of data (streaming) and of the access to the data in reconstruction jobs have been carried out
• The cosmics data have been stored successfully (in particular the alignment and calibration information)
• The muon data replica and access have been tested inside the overall ATLAS test with some dummy data:
• the production schemas for ATLAS have been replicated from the online RAC ATONR to ATLR and then on to the active Tier-1 sites
• tests of the access by ATHENA and of the replica/transfer of data between Tier-1 and Tier-0 have been done, with good performance at the Tier-1s (~200 jobs in parallel; tests done in 2006)
• Tests of the muon data access by the reconstruction have been partially done, without any problems

  22. Conclusions
• The muon database layout has been defined and tested
• The access and the architecture for most of the muon conditions data have been extensively tested
• The muon data replica and access during the ATLAS tests have given positive results

  23. Backup

  24. Muon Calibration Stream
• Muons in the relevant trigger regions are extracted from the second level trigger (LVL2) at a rate of ~1 kHz
• The data are streamed to 4 calibration centres (Ann Arbor, Munich, Rome, and Naples for the RPCs), with 100 CPUs each
• The stream is also useful for data quality assessment, alignment with tracks and trigger efficiency studies
• ~1 day latency for the full chain: from data extraction, to calibration computation at the centres, to writing the calibration constants into the Conditions DB at CERN
• The data flow and the DB architecture therefore need to be carefully designed

  25. Calibration and Alignment Stream
Non-event data are produced in different steps and used for the event reconstruction. The input raw data can come from the event stream or be processed by the sub-detector read-out system:
• At the ROD level (not read out by the standard DAQ path, not physics events: pulse signals)
• At the event filter level (standard physics events: Z -> ee, muon samples, etc.)
• After the event filter but before the “prompt reconstruction”
• Offline, after the “prompt reconstruction” (Z -> ee, Z+jet, ll)

  26. Calibration Stream (technical detector streams)
• An Inner Detector alignment stream (100 Hz of reconstructed-track info, 4 kB)
• A LAr electromagnetic calorimeter stream (50 Hz of inclusive electrons with pt > 20 GeV, up to 50 kB)
• A muon calibration stream (Level-1 trigger regions, ~10 kHz of 6 kB)
• Isolated hadrons (5 Hz of 400 kB)
Express Stream (processed promptly, i.e. within < 8 hours)
• Contains all the calibration samples needed to extract calibration constants before the first-pass reconstruction of the rest of the data: Z -> ll, pre-scaled W -> l, tt, etc.
• Inclusive high-pt electrons and muons (20 Hz with full event read-out, 1.6 MB)
These streams sum to a total data rate of about 32 MB/s, dominated by the inclusive high-pt leptons (13% of EF bandwidth = 20 Hz of 1.6 MB events): RAW data -> 450 TB/year. More streams are now subsumed into the express stream.

  27. Some Numbers: Conditions DB Design
• ATLAS daily reconstruction and/or analysis job rates will be in the range from 100k to 1M jobs/day
• For each of the ten Tier-1 centres, that corresponds to Conditions DB access rates of 400-4000 jobs/hour
• Each reconstruction job will read 10-100 MB of data
• The ATLAS request to the Tier-1s is a 3-node RAC cluster dedicated to the experiment
• The expected rate of data flow to the Tier-1s is between 1-2 GB/day
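A quick arithmetic check of how the per-Tier-1 rate follows from the daily totals (assuming, as above, ten Tier-1 centres sharing the load evenly):

```python
# 100k-1M jobs/day shared across ten Tier-1s -> roughly 400-4000 jobs/hour each.
for jobs_per_day in (1e5, 1e6):
    per_tier1_per_hour = jobs_per_day / 10 / 24
    print(f"{jobs_per_day:.0e} jobs/day -> {per_tier1_per_hour:.0f} jobs/hour per Tier-1")
```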

  28. Workload Scalability Results (2006), Vaniachine @ CHEP
• In ATLAS we expect 400 to 4000 jobs/hour for each Tier-1
• For 1/10th of the Tier-1 capacities, that corresponds to rates of 200 to 2000 jobs/hour: good results!
• For the access by Athena we have obtained ~1000 seconds per job at ATLR, due to the DCS and TDAQ schema access

  29. Databases
• Databases are organized collections of data, organized according to a certain data model
• The data model defines not only the structure but also which operations can be performed
[Diagrams: the ATLAS detector front-end (VME crates, RODs), the Level-1 and Level-2 triggers, the event filter and the DCS (detector control system: HV, LV, temperature, alignment) feed the Configuration and Conditions databases, alongside the byte-stream files read by the ATHENA code. Manual input, the online calibration farm, the monitoring data and the geometry/setup information populate the Configuration DB and the Conditions DB, which are queried by the reconstruction farms and by the offline analysis.]

  30. Muon Spectrometer Strategy for Muon PID
Alignment accuracy (test-beam results):
• Eµ ~ 1 TeV -> sagitta Δ ~ 500 µm
• δp/p ~ 10% -> δΔ ~ 50 µm
• Alignment accuracy ~30 µm
• B-field accuracy |ΔB| ~ 1-2 mT
Dilepton resonances (mostly Z) are sensitive to:
• tracker-spectrometer misalignment
• uncertainties on the magnetic field
• the detector momentum scale
• the width is sensitive to the muon momentum resolution
Calibration samples in 100 pb-1:
• J/ψ ~1600k (+ ~10% ψ’)
• Υ ~300k (+ ~40% Υ’/Υ’’)
• Z ~60k
