
The Athena Control Framework in Production, New Developments and Lessons Learned

Presentation Transcript


  1. The Athena Control Framework in Production, New Developments and Lessons Learned
     September 27, 2004
     C. Leggett, P. Calafiura, W. Lavrijsen, M. Marino, D. Quarrie

  2. Athena and Gaudi
  • Gaudi framework shared by LHCb, ATLAS, GLAST, HARP, and OPERA
  • Based on a modular component architecture, structured around dynamically loadable libraries
  • Maintains a separation between transient and persistent layers, allowing new technologies to replace aging ones with minimal impact on the end user
  • Athena comprises the ATLAS-specific extensions to Gaudi, most notably:
    • StoreGate – the data store
    • Interval of Validity Service – managing time-dependent data
    • Pileup – combining multiple events
    • HistorySvc – maintaining a multi-level record of job and data history
    • Python scripting
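  As a rough illustration of the component model on the configuration side, the sketch below mimics an old-style Athena Python jobOptions fragment that loads a component library and schedules an algorithm. It is a sketch only: "MyAnalysis" and "MyAlg" are placeholder names rather than real ATLAS packages, and the exact jobOptions idioms varied between releases.

      # hypothetical jobOptions.py fragment, run under the athena executable
      theApp.Dlls   += [ "MyAnalysis" ]        # load a dynamically loadable component library (placeholder name)
      theApp.TopAlg += [ "MyAlg/MyInstance" ]  # schedule algorithm type "MyAlg" under instance name "MyInstance"

      # configure a property of the algorithm instance through the JobOptions Service
      MyInstance = Algorithm( "MyInstance" )
      MyInstance.OutputLevel = 3               # 3 = INFO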

  3. Athena and Gaudi
  [Component architecture diagram: the Application Manager coordinates the Event Loop Manager, Sequencer, Algorithms and Converters; core services include the JobOptions Service, Message Service, Particle Properties Service, Histogram Service, Scripting Service, Auditors and other services; the Event Store and Detector Store are StoreGateSvc instances backed by Persistency Services reading and writing data files.]

  4. Athena in Production
  • Data Challenge II:
    • phase 1:
      • ~30 physics channels, tens of millions of events
      • several million calibration events
      • currently producing raw data in a distributed worldwide environment using the Grid
    • phase 2: reconstruction and real-time distribution of data to Tier 1 institutes
    • phase 3: worldwide distributed analysis on the Grid
  • Combined Test Beam:
    • taking data since July in various configurations
    • ~5000 runs, > 1 TB of data written
    • G4 simulation and reconstruction of the CTB setup occurring in parallel
    • conditions databases in production, for both reading and writing
    • preparing for phase II: massive reconstruction of all real data and production of MC data

  5. Combined Test Beam Setup

  6. Interval of Validity Service
  • Makes associations between user data and time-dependent data that resides in specialized conditions databases
  • Transparent to the user
  • Data is only read from the persistent layer when it is used; validity interval information is kept separate from the data
  • Hierarchical callback functions can be associated with time-dependent data, so that they are triggered when the data enters a new interval of validity (see the sketch below)
  • Validity interval information and time-dependent data can be preloaded on a job or run basis for trigger or test beam situations where database access is unwanted
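  The following is a minimal sketch of the interval-of-validity idea in plain Python, not the Athena IOVSvc API: conditions data is cached together with its validity range, re-read from a (here faked) conditions source only when the current event time falls outside the cached interval, and a callback fires on each transition.

      # illustrative interval-of-validity cache; not the Athena IOVSvc interface
      class IOVCache:
          def __init__(self, read_conditions, callback):
              self.read_conditions = read_conditions  # function: time -> (data, (start, stop))
              self.callback = callback                # fired whenever a new interval is entered
              self.data, self.interval = None, None

          def get(self, event_time):
              # read from the persistent layer only when leaving the cached validity interval
              if self.interval is None or not (self.interval[0] <= event_time < self.interval[1]):
                  self.data, self.interval = self.read_conditions(event_time)
                  self.callback(self.interval, self.data)
              return self.data

      # toy conditions source: a calibration constant valid for 100 time units at a stretch
      def fake_conditions(t):
          start = (t // 100) * 100
          return {"pedestal": 0.1 * (start // 100)}, (start, start + 100)

      cache = IOVCache(fake_conditions, lambda iov, data: print("new IOV", iov, data))
      for t in (5, 42, 150):
          cache.get(t)    # conditions read and callback fired only at t=5 and t=150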

  7. Access to Time-Varying Data
  • Maintains separation of transient and persistent layers
  • Testbeam environment making good use of the IOVService for:
    • slow control
    • calibration

  8. Detector Pileup in DC2
  • Overlay ~1000 minimum bias events onto the original physics stream
  • Requirement: digitization algorithms should run unchanged
  • Tuple event iterator: manages multiple input streams
    • selects random permutations from a circular buffer of min-bias events (see the sketch below)
  • Memory optimization: requirement of total job size < 1 GB
    • two-dimensional (detector and time-dependent) event caching
  • Stress test of the architecture's flexibility
  • Excellent tool to expose memory leaks (they become ~1000x bigger)
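  As a rough sketch of the circular-buffer idea (the slide does not spell out the details, so the buffer size and refill policy below are assumptions): cached min-bias events are sampled at random for each signal event while the buffer slowly cycles fresh events in from the input stream.

      import random
      from collections import deque

      # illustrative only: bounded circular buffer of cached min-bias events
      class MinBiasBuffer:
          def __init__(self, stream, size):
              self.stream = stream                  # iterator over the min-bias input stream
              self.buf = deque(maxlen=size)         # circular buffer: oldest events get evicted
              for _ in range(size):
                  self.buf.append(next(self.stream))

          def sample(self, n):
              # draw n cached events at random, then cycle one fresh event into the buffer
              picked = random.sample(list(self.buf), n)
              self.buf.append(next(self.stream))
              return picked

      # toy usage: overlay 3 "min-bias events" (integers stand in for events) on each signal event
      minbias = MinBiasBuffer(iter(range(10**6)), size=100)
      for signal_event in range(5):
          overlay = minbias.sample(3)
          print("signal", signal_event, "overlaid with", overlay)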

  9. History
  • Provenance of data must be assured
  • User selection of data based on its history
  • Full history of generation and processing is recorded and associated with all data
  • Important in analysis to know the complete source of the data and all cuts applied
  • The History Service keeps track of:
    • Environment
    • Job configuration
    • Services
    • Algorithms, AlgTools, SubAlgorithms
    • DataObjects

  10. Python-Based Scripting Interface
  • Python woven into the framework, replacing flat text configuration files
    • dynamic job configuration
    • conditional branching (see the example below)
    • DetFlags
  • Interactive analysis
    • data object access and manipulation
    • connection to ROOT histogramming facilities
  • Object type info for dictionaries and persistency
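  Because the configuration is ordinary Python rather than a flat text file, a job can branch on flags at configuration time. The fragment below is illustrative only: the doTruth flag and the library and algorithm names are placeholders, not real ATLAS options.

      # hypothetical jobOptions fragment showing conditional branching
      doTruth = True                             # placeholder flag; could be set on the command line

      theApp.Dlls   += [ "RecoExample" ]         # placeholder component library
      theApp.TopAlg += [ "CaloRecAlg" ]          # placeholder reconstruction algorithm

      if doTruth:
          # only schedule truth matching when Monte Carlo truth is available
          theApp.Dlls   += [ "TruthExample" ]
          theApp.TopAlg += [ "TruthMatchAlg" ]

      theApp.EvtMax = 100                        # number of events to process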

  11. Detector Configuration Matrix: output of DetFlags.Print()

                           pixel   SCT   TRT    em   HEC  FCal  Tile   MDT   CSC   TGC   RPC Truth  LVL1
      detdescr     :          ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
      digitize     :          ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
      geometry     :          ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
      haveRIO      :          --    --    --    --    --    --    --    --    --    --    --    --    --
      makeRIO      :          --    --    --    --    --    --    --    --    --    --    --    --    --
      pileup       :          ON    ON    ON    --    --    --    --    --    --    --    --    ON    --
      readRDOBS    :          --    --    --    --    --    --    --    --    --    --    --    --    --
      readRDOPool  :          ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
      readRIOBS    :          --    --    --    --    --    --    --    --    --    --    --    --    --
      readRIOPool  :          --    --    --    --    --    --    --    --    --    --    --    --    --
      simulate     :          --    --    --    --    --    --    --    --    --    --    --    --    --
      writeBS      :          --    --    --    --    --    --    --    --    --    --    --    --    --
      writeRDOPool :          ON    ON    --    --    --    --    ON    --    --    --    --    ON    --
      writeRIOPool :          --    --    --    --    --    --    --    --    --    --    --    --    --
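  A jobOptions fragment along the following lines would produce a matrix like the one above; apart from DetFlags.Print(), which the slide shows, the import path and the setter names are assumptions based on the per-task, per-detector flag structure.

      # hypothetical DetFlags usage; only DetFlags.Print() is taken from the slide
      from AthenaCommon.DetFlags import DetFlags

      DetFlags.ID_setOn()               # inner detector: pixel, SCT, TRT
      DetFlags.Tile_setOn()             # Tile calorimeter
      DetFlags.Truth_setOn()            # Monte Carlo truth
      DetFlags.simulate.all_setOff()    # this job digitizes/reconstructs, it does not simulate

      DetFlags.Print()                  # dump the task-by-detector matrix shown above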

  12. Lessons Learned
  • Design for Performance
    • pileup an excellent testbed
    • database access can be problematic
  • Design for Persistency
    • various container classes rewritten with persistency in mind
    • ClassID Service to globally monitor objects
  • Better support for realtime monitoring
    • essential for proper testbeam studies
