
Offline Software for the ATLAS Combined Test Beam



Presentation Transcript


1. Offline Software for the ATLAS Combined Test Beam
Ada Farilla – I.N.F.N. Roma3
On behalf of the ATLAS Combined Test Beam Offline Team: M. Biglietti, M. Costa, B. Di Girolamo, M. Gallas, R. Hawkings, R. McPherson, C. Padilla, R. Petti, S. Rosati, A. Solodkov
Computing in High Energy Physics, Interlaken, September 2004

2. ATLAS Combined Test Beam (CTB)
• A full slice of the ATLAS detector (barrel) has been installed on the H8 beam line of the SPS in the CERN North Area: beams of e/π/μ/γ will be delivered in the range 1–350 GeV
• Data taking started on May 17th of this year and will continue until mid-November 2004: ATLAS is one of the main users of the SPS beam in 2004
• The setup spans more than 50 meters and is enabling the ATLAS team to test the detector's characteristics long before LHC starts
• Among the most important goals for this project:
  • Study the detector performance of an ATLAS barrel slice
  • Calibrate the calorimeters over a wide range of energies
  • Gain experience with the latest offline reconstruction and simulation packages
  • Collect data for a detailed data–Monte Carlo comparison
  • Gain experience with the latest Trigger and Data Acquisition packages
  • Test the ATLAS Computing Model with real data
  • Study commissioning aspects (e.g. integration of many sub-detectors, testing the online and offline software with real data, management of conditions data)

3. ATLAS CTB: Layout
• The setup includes elements of all the ATLAS sub-detectors:
  • Inner Detector: Pixel, Semi-Conductor Tracker (SCT), Transition Radiation Tracker (TRT)
  • Electromagnetic Liquid Argon Calorimeter
  • Hadronic Tile Calorimeter
  • Muon System: Monitored Drift Tubes (MDT), Cathode Strip Chambers (CSC), Resistive Plate Chambers (RPC), Thin Gap Chambers (TGC)
• The DAQ includes elements of the Muon and Calorimeter Trigger (Level 1 and Level 2 Triggers) as well as Level 3 farms
• A dedicated internal Fast Ethernet network handles controls; a dedicated internal Gigabit Ethernet network handles the data, which are stored on the CERN Advanced Storage Manager (CASTOR)
• About 5000 runs (50 million events) already collected so far, >1 TB of data stored

4. ATLAS CTB: Layout
[Figures: the ATLAS detector; the CTB layout as seen by the G4 simulation, with the incoming beam indicated; partial view of the setup in the H8 area]
About two hundred people are involved in the effort (physicists, engineers and technicians), with about 40 people in four support groups ensuring the smooth running of the detectors and DAQ

5. ATLAS CTB: Offline Software
• The CTB Offline Software is based on the standard ATLAS software, with adaptations due to the different geometry of the CTB setup w.r.t. the ATLAS geometry. Very little CTB-specific code was developed, mostly for monitoring and for sub-detector code integration.
• About 900 packages are now in the ATLAS offline software suite, residing in a single CVS repository at CERN. Management of package dependencies and the building of libraries and executables is performed with CMT.
• The ATLAS software is based on the Athena/Gaudi framework, which embodies a separation between data and algorithms. Data are handled through an Event Store for event information and a Detector Store for condition information. Algorithms are driven through a flexible event loop. Common utilities (e.g. the magnetic field map) are provided through services. Python JobOptions scripts allow the user to specify algorithm and service parameters and sequencing (see the sketch below).
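To make the JobOptions mechanism concrete, here is a minimal sketch in the flavour of an old-style Athena Python job options fragment. It is illustrative only: the library, algorithm, service and property names (MyCTBRecoAlgs, MyClusterMaker, EnergyThresholdGeV, the field-service settings) are invented placeholders, and such a fragment is interpreted by the Athena bootstrap rather than run as a standalone script.

    # Illustrative job options fragment (interpreted by Athena, not standalone Python).
    # All component and property names below are invented placeholders.
    theApp.Dlls   += ["MyCTBRecoAlgs"]                  # load a component library
    theApp.TopAlg += ["MyClusterMaker/ClusterMaker"]    # schedule an algorithm in the event loop

    ClusterMaker = Algorithm("ClusterMaker")            # handle used to configure that algorithm
    ClusterMaker.EnergyThresholdGeV = 0.5               # set an algorithm property

    MagFieldSvc = Service("MagneticFieldSvc")           # common utilities are provided as services
    MagFieldSvc.UseConstantField = True                 # set a service property

    theApp.EvtMax = 1000                                # number of events for the event loop

The pattern shows the essential idea of the slide: which algorithms run, in what order, and with what parameters is decided entirely in the Python configuration rather than in compiled code.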

6. ATLAS CTB: Software Release Strategy
• New ATLAS releases are built approximately every three weeks, following a predefined plan for software development
• New CTB releases are built every week from a CVS branch of the standard release
• During test beam operation, a recurring concern has been finding a compromise between two conflicting needs: new functionality and stability. The strategy was discussed during daily/weekly meetings: a good level of communication between all the involved parties helped in solving problems and making good decisions.

7. ATLAS CTB: Simulation
[Figure: CTB layout in H8 (2004), showing the Inner, Calorimeter and Muon regions and the x/y/z axes]
• Simulation of the CTB layout is performed in a flexible way by a package (CTB_G4Sim) similar to the ATLAS G4 simulation package (same code as for the full ATLAS detector)
• Python JobOption scripts and macro files allow selection among different configurations (e.g. with and without magnetic field, different positions of the sub-detectors on the beam line, different rotations of the calorimeters w.r.t. the y axis, etc.); a configuration sketch follows below
• We are in pre-production mode: about 500,000 events have already been produced for validation studies
• The detailed data–Monte Carlo comparison is expected to be one of the major outcomes of the physics analysis
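As a rough illustration of how such a configuration choice might be expressed, the fragment below sketches a set of simulation flags followed by the inclusion of a top-level job options file; the flag names, values and the included file name are assumptions for illustration, not the real CTB_G4Sim options.

    # Illustrative fragment only: flag names, values and the included file are invented.
    SimFlags = {
        "MagneticFieldOn"  : True,     # run with or without the bending magnet field
        "CaloRotationDeg"  : 11.25,    # rotation of the calorimeters w.r.t. the y axis
        "TRT_ZPosition_mm" : 1150.0,   # example position of a sub-detector on the beam line
        "BeamEnergyGeV"    : 250.0,    # nominal beam energy of the simulated run
    }

    include("CTB_G4Sim/CTB_G4Sim_jobOptions.py")   # hypothetical top-level job options file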

8. ATLAS CTB: Reconstruction
• Full reconstruction is performed in a package (RecExTB) that integrates all the sub-detectors' reconstruction algorithms, through the following steps:
  • Conversion of raw data from the online format to their object representation
  • Access to conditions data
  • Access to ancillary information (e.g. scintillators, beam chambers, trigger patterns)
  • Execution of sub-detector reconstruction
  • Combined reconstruction across sub-detectors
  • Production of Event Summary Data and ROOT ntuples
  • Production of XML files for event visualization with the Atlantis event display
• From the full reconstruction of real data we are learning a lot about calibration and alignment procedures and about the performance of new reconstruction algorithms. Moreover, we expect to gain experience in exercising the available tools for physics analysis, aiming at their widespread use in a community much bigger than the one restricted to software developers: a non-negligible by-product from an educational point of view. A sketch of the job steering follows below.
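The sketch below gives the flavour of such a steering file: one boolean flag per sub-detector or output, followed by the inclusion of the package's top-level job options. The flag and file names are assumptions for illustration, not the actual RecExTB interface.

    # Illustrative steering fragment; flag and file names are assumptions, not the real RecExTB ones.
    doPixel    = True    # raw-data conversion and reconstruction for the Pixel detector
    doSCT      = True
    doTRT      = True
    doLAr      = True    # electromagnetic Liquid Argon calorimeter
    doTile     = True    # hadronic Tile calorimeter
    doMuons    = True    # MDT / CSC / RPC / TGC reconstruction
    doCombined = True    # combined reconstruction across sub-detectors

    doESD      = True    # write Event Summary Data
    doNtuple   = True    # write the combined ROOT ntuple
    doJiveXML  = True    # write XML files for the Atlantis event display

    include("RecExTB/RecExTB_jobOptions.py")       # hypothetical top-level job options file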

9. ATLAS CTB: (very) preliminary results
[Plots: energy in the Tile calorimeter, E(TileCal), for events with and without a track in the Muon system; energy in the electromagnetic LAr calorimeter for 250 GeV electrons; z0 parameter (mm) for tracks in the Inner Detector (Pixels+TRT) and in the Muon system]

10. ATLAS CTB: Detector Description
• The detailed description of the detectors and of the experimental set-up is a key issue for the CTB software, along with the possibility to handle different experimental layouts in a coherent way (versioning); a toy illustration of the versioning idea follows below
• The NOVA MySQL database is presently used for storing detector description information, and in a few weeks this information will also be fully available in ORACLE
• The new ATLAS detector description package (GeoModel) has also been used extensively, both for simulation and for reconstruction
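As a toy illustration of the versioning idea only (the tags and parameters below are invented and are not taken from the NOVA database or GeoModel), a layout tag selects one coherent set of geometry parameters:

    # Toy illustration of layout versioning; tags and parameters are invented.
    CTB_LAYOUTS = {
        "CTB-H8-2004-01": {"magnet_field_on": True,  "calo_rotation_deg": 0.0},
        "CTB-H8-2004-02": {"magnet_field_on": False, "calo_rotation_deg": 11.25},
    }

    def detector_description(layout_tag):
        """Return the coherent set of geometry parameters for one experimental layout."""
        if layout_tag not in CTB_LAYOUTS:
            raise ValueError("Unknown CTB layout tag: %s" % layout_tag)
        return CTB_LAYOUTS[layout_tag]

    print(detector_description("CTB-H8-2004-01"))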

11. ATLAS CTB: Conditions Database
The management of large volumes of conditions data (e.g. slow-control data, calibration and alignment parameters, run parameters) is a key issue for the CTB. We are now using an infrastructure that will help in defining the long-term ATLAS solution for the conditions database. Three components:
• Lisbon Interval-Of-Validity Database (IoVDB, MySQL implementation)
  • Data stored internally as blobs (XML strings) or tables
  • Or as external references to the POOL or NOVA databases
• POOL for large calibration objects
• NOVA for Liquid Argon information
A toy sketch of an interval-of-validity lookup follows below.
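The following self-contained toy sketch uses only the Python standard library, not the Lisbon IoVDB API, to show the interval-of-validity idea: a payload, here an XML string, is retrieved by folder and event time.

    # Toy model of an interval-of-validity lookup; not the Lisbon IoVDB interface.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE conditions (folder TEXT, since INTEGER, until INTEGER, payload TEXT)")
    db.execute("INSERT INTO conditions VALUES (?, ?, ?, ?)",
               ("/tile/calibration", 1000, 2000, "<calib><gain>1.02</gain></calib>"))

    def get_payload(folder, time):
        """Return the payload whose validity interval [since, until) contains 'time'."""
        row = db.execute("SELECT payload FROM conditions WHERE folder = ? AND since <= ? AND ? < until",
                         (folder, time, time)).fetchone()
        if row is None:
            raise LookupError("No valid conditions for %s at time %d" % (folder, time))
        return row[0]

    print(get_payload("/tile/calibration", 1500))   # prints the XML blob stored above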

12. ATLAS CTB: Conditions Database
[Diagram: data acquisition programs write to the online server (atlobk01), hosting the Offline Bookkeeping (OBK) databases, POOL catalogues, NOVA CondDB, CTB and test databases; these are replicated to the offline server (atlobk02), which serves browsing and Athena applications]
• DB replication using the MySQL binary log
• Unidirectional only: online → offline

13. ATLAS CTB: Production Infrastructures
• GOAL: we have to deal with a big data set (more than 5000 runs, 10⁷ events) and be able to perform many reprocessings in the next months. A production structure is in place to achieve that in a short time and in a centralized way.
• A subset of the information from the Conditions Database is copied to the bookkeeping database AMI (ATLAS Metadata Interface): run number, file name, beam energy, etc. Other run information is entered by the shifter.
• AMI (a Java application) provides a generic read-only web interface for searching. It is directly interfaced to AtCom (ATLAS Commander), a tool for automated job definition, submission and monitoring, developed for the CERN LSF batch system and widely used during "ATLAS Data Challenge 1" in 2003. A toy sketch of the run-selection and job-definition step follows below.
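The self-contained toy below illustrates only the workflow just described, selecting runs by metadata and turning each into a batch job definition; it does not use the real AMI or AtCom interfaces, and the run records, script name and queue are invented.

    # Toy illustration of run selection by metadata and job definition; not the AMI/AtCom API.
    runs = [
        {"run": 2101, "file": "run2101.data", "beam_energy_GeV": 250, "particle": "e"},
        {"run": 2102, "file": "run2102.data", "beam_energy_GeV": 180, "particle": "pi"},
        {"run": 2103, "file": "run2103.data", "beam_energy_GeV": 250, "particle": "e"},
    ]

    def select_runs(metadata, **criteria):
        """Return the runs whose metadata match all the given key/value criteria."""
        return [r for r in metadata if all(r.get(k) == v for k, v in criteria.items())]

    def job_definition(run):
        """Build an LSF-style submission command for one reconstruction job (illustrative)."""
        return "bsub -q atlas run_reconstruction.sh %s" % run["file"]

    for r in select_runs(runs, beam_energy_GeV=250, particle="e"):
        print(job_definition(r))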

14. ATLAS CTB: Production Infrastructures
• AMI, AtCom and a set of scripts represent the "production infrastructure" presently used both for real-data reconstruction and for the Monte Carlo pre-production done at CERN with Data Challenge 1 tools
• The bulk of the Monte Carlo production will be performed on the Grid ("continuous production"), using the infrastructure presently used for running "ATLAS Data Challenge 2"
• Both production infrastructures will be used in the coming months, operated by two different production groups

15. ATLAS CTB: High Level Trigger (HLT)
• HLT = combination of the Level 2 and Level 3 triggers
• HLT at the CTB as a service:
  • It provides the possibility to run "high-level monitoring" algorithms executed in the Athena processes running in the Level 3 farms. It is the first place in the data flow where correlations between objects reconstructed in the separate sub-detectors can be established.
• HLT at the CTB as a client:
  • Complex selection algorithms are under test, with the aim of having two full slices (one for electrons and photons, one for muons), LVL1 → LVL2 → LVL3, producing separate outputs in the Sub-Farm Output for the different selected particles

16. Conclusions
• The Combined Test Beam is the first opportunity to exercise the "semi-final" ATLAS software with real data in a complex setup
• All the software has been integrated and is running successfully, including the HLT: centralized reconstruction has already started on limited data samples and preliminary results are already available, showing good quality
• Calibration, alignment and reconstruction algorithms will continue to improve and to be tested with the largest data sample ever collected in a test-beam environment (O(10⁷) events)
• It has been a first, important step towards the integration of different sub-detectors and of people from different communities (software developers and sub-detector experts)
