HLT/DAQ Status report

  1. CSN1 April 2005 HLT/DAQ Status report Valerio Vercesi

  2. Outline • New TDAQ Organization • Italian activities and roles • Pre-series procurements • Status, deployment • Documentation • Activities • Combined Test Beam results • Monitoring and ROD Crate DAQ • Algorithms development • Planning and outlook • Systems commissioning • Cosmic data taking V. Vercesi - INFN Pavia

  3. ATLAS TDAQ system
TDAQ selects roughly one event in every million. The chain runs from the 40 MHz pipeline memories through three trigger levels down to ~300 MB/s written to storage:
• LEVEL-1 TRIGGER: hardware-based; coarse granularity from the calorimeter and muon systems; latency ~2 μs; accept rate ~75 kHz into the Readout Drivers (RODs)
• LEVEL-2 TRIGGER: seeded by Region-of-Interest (RoI) information from LVL1; full granularity for all subdetector systems, read on demand from the ~1600 Readout Buffers (ROBs); fast rejection ("steering"); LVL2 farm, latency ~10 ms; accept rate ~2 kHz into the event builder network
• EVENT FILTER: seeded by the Level-2 result; full event access; algorithms inherited from offline; EF farm of ~1000 CPUs, latency ~1 s; accept rate ~200 Hz; storage at ~300 MB/s
V. Vercesi - INFN Pavia
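As a back-of-the-envelope cross-check of the figures quoted above (my arithmetic, not part of the original slide), the storage bandwidth and output rate imply an average event size of ~1.5 MB, and each level's rejection factor follows from its input and output rates:

\[
\frac{300\ \mathrm{MB/s}}{200\ \mathrm{Hz}} = 1.5\ \mathrm{MB/event},
\qquad
\frac{40\ \mathrm{MHz}}{75\ \mathrm{kHz}} \approx 530\ (\mathrm{LVL1}),\quad
\frac{75\ \mathrm{kHz}}{2\ \mathrm{kHz}} \approx 38\ (\mathrm{LVL2}),\quad
\frac{2\ \mathrm{kHz}}{200\ \mathrm{Hz}} = 10\ (\mathrm{EF}).
\]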

  4. TDAQ Steering Group • The role of the TDSG in the next two years will be more focused on project planning and progress monitoring (strategic, financial) • Relying more on the 3 coordination structures for detailed technical follow-up • Ex-officio presence according to agenda (includes links to offline, DB & commissioning) • Experts and coordinators of system-wide activities invited as appropriate • This is a proposal for 2005 • Reserve the possibility to propose modifications if needed V. Vercesi - INFN Pavia

  5. Italian activities and roles
• S. Falciano (Roma1): HLT Commissioning Coordinator
• A. Negri (Pavia): Event Filter Dataflow Coordinator
• A. Nisati (Roma1): TDAQ Institute Board chair and PESA Muon Slice Coordinator
• F. Parodi (Genova): PESA b-tagging Coordinator
• V. Vercesi (Pavia): Deputy HLT leader and PESA (Physics and Event Selection Architecture) Coordinator
• And many people who acted as a driving force and a reference point for several activities during the Combined Test Beam
• Italian activities
• Level-1 barrel muon trigger (Napoli, Roma1, Roma2)
• Level-2 muon trigger (Pisa, Roma1)
• Level-2 pixel trigger (Genova)
• Event Filter Dataflow (Pavia, LNF)
• Event Filter muon algorithms (Lecce, Pavia, Roma1)
• DAQ (LNF, Pavia, Roma1)
• Monitoring (Pavia, Pisa, Cosenza, Napoli)
• CTB DAQ (TDAQ + detector groups)
V. Vercesi - INFN Pavia

  6. New TDAQ Organization • Italian activities and roles • Pre-series procurements • Status, deployment • Documentation • Activities • Combined Test Beam results • Monitoring and ROD Crate DAQ • Algorithms development • Planning and outlook • Systems commissioning • Cosmic data taking V. Vercesi - INFN Pavia

  7. Pre-series
"Module-0" of the final system: 7 racks (~10% of the final dataflow), installed in SDX1 and USA15.
• One full LVL2 rack (DAQ rack): 32 HE PCs
• One L2-misc rack (DAQ rack): 50% of the RoI Builder, 3 LE PCs (1 pROS, 2 L2SV)
• One central switch rack (DAQ rack): 128-port GbE switch for LVL2 + Event Building
• Part of an EFIO rack (DAQ rack): 10 HE PCs (6 SFI, 2 SFO, 2 DFM)
• Part of an Event Filter rack (DAQ rack): 12 HE PCs
• Part of an Online rack (DAQ rack): 4 HE PCs (monitoring), 2 LE PCs (control)
• One ROS rack (TC rack with horizontal cooling): 11 ROS PCs, 44 ROBINs
All racks include one or more local file servers and one or more local switches.
V. Vercesi - INFN Pavia

  8. Accounting
• The CERN-driven Market Survey to understand current costs versus technical specifications has taken longer than expected
• Some delay also came from the definition of the specs themselves, re-worked as a follow-up of the CTB experience concerning reliability
• The INFN-approved contribution is shared as
• Read-Out Systems: 51 kCHF (ROS racks)
• Online Computing System: 40 kCHF (monitoring, operations)
• Online Network System: 44 kCHF (switches, file server)
• A description of components and specifications is now available on EDMS
• Together with the deployment experience gained in 2005, this will form the basis for the procurement of items in 2006 and onwards
V. Vercesi - INFN Pavia

  9. New TDAQ Organization • Italian activities and roles • Pre-series procurements • Status, deployment • Documentation • Activities • Combined Test Beam results • Monitoring and ROD Crate DAQ • Algorithms development • Planning and outlook • Systems commissioning • Cosmic data taking V. Vercesi - INFN Pavia

  10. H8: from drawings, to G4 simulations, to reality
[Figure: drawings, Geant4 renderings and photographs of the H8 beam line, showing the first muon chambers, the electromagnetic calorimeter, the hadronic calorimeter, the beam line and the Transition Radiation Tracker.]
V. Vercesi - INFN Pavia

  11. 2004 ATLAS Combined Test Beam
• Main scope: runs with combinations of detectors
• Full ATLAS barrel slice and muon end-cap on H8
• Four important aspects
• Calibrate the calorimeters over a wide range of energies (1-350 GeV)
• Finalize the trigger studies with LVL1 Muon and Calorimeter
• Study commissioning aspects and get experience with final elements of the readout
• Study the detector performance of an ATLAS barrel slice
• Pre-commissioning activity
• Shorter time to commission
• Learn to integrate and operate the system
• Find problems in advance
• Executive summary
• All systems of TDAQ have been integrated with the detectors, with other parts of TDAQ, with databases and with offline software
• TDAQ spent far more time serving the detectors than pursuing its own client goals
• The setup was really big, and the detectors needed more time than expected to debug their own elements and functionality
• An impressive amount of information and experience was collected
• The Italian TDAQ community wishes to thank the CSN1 and our referees for the support given to this activity
V. Vercesi - INFN Pavia

  12. TDAQ @ CTB
• TDAQ at the ATLAS test beam used the latest prototypes to provide support for the ATLAS activity
• for a duration of eight months (on-call 24x7)!
• The same software releases are used for the test beam, for performance measurements in test beds, and as a base for further development
• The test beam showed how complex the system is and gave a measure of its level of development
• It always required TDAQ experts to set it up
• Many Italians in the support teams
• TDAQ went to the beam test with the experts
• The support effort has been a key element of the CTB operations
• All the infrastructure and general PCs were supported by TDAQ (network boot, DHCP, etc.)
V. Vercesi - INFN Pavia

  13. TDAQ setup in CTB
[Diagram: one ROS each for TGC, CSC, LVL1calo, RPC, SCT, Pixel, MDT, Tile, LAr, LVL1mu and TRT, connected over a GbE data network to a local LVL2 farm, the pROS, the DFM and the SFIs of the Event Builder, with run control and monitoring alongside; the SFIs feed a local EF farm, an EF farm at Meyrin (a few km away) and, through a gateway, remote farms in Poland, Canada and Denmark (infrastructure tests only); the SFO writes to CASTOR (IT).]
• The pROS contains the LVL2 result that steers/seeds the EF processing
• Compared to ATLAS: ~10% of the DAQ and ~2% of the HLT, just counting PCs
V. Vercesi - INFN Pavia

  14. Integration of software
• Components developed by different groups, often separately, are exercised together
• Detector DAQ using the ROD Crate DAQ skeleton provided by TDAQ
• Online SW (control, configuration, user interface, monitoring tools)
• Data Flow (RCD, ROS, flow of data to LVL2 processors, Event Building, flow to EF, storage)
• Detector monitoring (detector-specific, using the DAQ infrastructure)
• High-Level Trigger (selection algorithms, developed in the offline environment, run on LVL2 and EF processors)
• Offline analysis (Athena framework, unpacking of raw data, analysis algorithms)
• Conditions Database, link from the Detector Control System to Offline
• Heavy dependence in many corners on the availability of offline software components
• Online and offline systems are tightly coupled at various levels: the cost-benefit ratio needs a revised assessment
• e.g. only the next Athena release, 10.0.1, will be a "consolidated" release for CTB analysis
V. Vercesi - INFN Pavia

  15. ROD Crate DAQ
The ROD Crate DAQ (RCD) provides data-acquisition functionality at the level of the Read-Out Drivers. It satisfies the detectors' need for centralized and uniform support at the ROD crate: local processing, configuration, event sampling, calibration data, …
[Diagram: front-end electronics send detector-specific event fragments to the RODs; in each VME crate a ROD Crate Processor (RCP) handles configuration & control plus event sampling and calibration data, connected via its NIC to the ROD Crate LAN (GbE) and a workstation; ROD fragments leave on the ROLs towards the ROBINs sitting on the PCI bus of the ROS PCs, which forward ROB/ROS fragments over GbE to the LVL2 and Event Builder networks.]
• Total number of ROD crates: 90
• Total number of ROS PCs: 144
• All in USA15 (underground)
V. Vercesi - INFN Pavia
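To make the fragment hierarchy concrete, here is a minimal C++ sketch of how ROD fragments are buffered and concatenated on their way up. The type and field names are invented for illustration; they do not reproduce the real ATLAS event format (the eformat library), which is considerably richer.

    #include <cstdint>
    #include <vector>

    // Hypothetical, simplified fragment types illustrating the hierarchy.
    struct RodFragment {                    // produced by one Read-Out Driver
        std::uint32_t source_id;            // which detector/ROD the data came from
        std::vector<std::uint32_t> data;    // detector-specific payload words
    };

    struct RobFragment {                    // one ROD fragment buffered on a ROBIN
        RodFragment rod;
    };

    struct RosFragment {                    // what a ROS PC ships on request
        std::uint32_t ros_id;
        std::vector<RobFragment> robs;      // all ROBINs hosted by this ROS PC
    };

    // A ROS PC concatenates the ROB fragments it hosts into one ROS fragment
    // before sending it over GbE to LVL2 or to the Event Builder.
    RosFragment buildRosFragment(std::uint32_t ros_id,
                                 const std::vector<RobFragment>& robs) {
        return RosFragment{ros_id, robs};
    }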

  16. Event Filter Dataflow design
• The EFD function is divided into specific tasks (Input from the SFI, Monitoring, Sorting, Calibration, ExtPTs towards the external Processing Tasks, Trash, Output to the SFO) that can be dynamically interconnected to form a configurable EF dataflow network on each node
• Output streams include the main output stream, calibration data and a debugging channel
• The internal dataflow is based on reference passing
• Only the pointer to the event (stored in the SharedHeap) flows among the different tasks
• Tasks that implement interfaces to external components are executed by independent threads (multi-threaded design)
• in order to absorb communication latencies and enhance performance; see the sketch below
• Proven to be a solid and versatile programming paradigm, coupling effectively to modern PC architectures (SMP)
V. Vercesi - INFN Pavia
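A minimal sketch of the reference-passing idea, assuming nothing about the real EFD interfaces (the Event, TaskQueue and task names here are invented): events are allocated once, and only shared pointers travel through the queues that connect tasks running in independent threads.

    #include <condition_variable>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    struct Event { std::vector<char> payload; };   // stands in for a SharedHeap entry

    class TaskQueue {                              // connects two EFD-like tasks
        std::queue<std::shared_ptr<Event>> q_;
        std::mutex m_;
        std::condition_variable cv_;
    public:
        void push(std::shared_ptr<Event> e) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(e)); }
            cv_.notify_one();
        }
        std::shared_ptr<Event> pop() {             // blocks until an event arrives
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this]{ return !q_.empty(); });
            auto e = std::move(q_.front()); q_.pop();
            return e;                              // nullptr acts as end-of-run marker
        }
    };

    int main() {
        TaskQueue queue;                           // e.g. Input task -> Output task
        std::thread output([&]{                    // "Output" task in its own thread
            while (auto e = queue.pop())           // only the pointer moved here
                std::cout << "wrote event of " << e->payload.size() << " bytes\n";
        });
        for (int i = 0; i < 3; ++i)                // "Input" task: allocate once...
            queue.push(std::make_shared<Event>(Event{std::vector<char>(1024)}));
        queue.push(nullptr);                       // end-of-run
        output.join();
    }

Because only pointers move, a slow downstream task never forces a copy of the event data, and the blocking pop lets each thread absorb communication latency independently.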

  17. GNAM Monitoring
• Starting from the experience of previous test beams, a group of people developed a complete monitoring chain (the GNAM Monitoring Tool)
• P. Adragna, M. Della Pietra, A. Dotti, R. Ferrari, C. Roda, W. Vandelli, P.F. Zema
[Diagram: the GNAM core receives transitions from users or the run controller; it samples events from the ROD/ROS/SFI/SFO via the Event Monitoring / Event Sampler services, or from a file on disk via the File Sampler, hands them to detector-specific user libraries, and publishes histograms to the Online Histogramming Service, where an interactive presenter displays them. The user libraries come from the detector groups, the sampling and histogramming services from the DAQ/Online SW group, and the core from the GNAM-Monitoring group.]
• GNAM has been used since the first day of the CTB to monitor the beam detectors
• During the CTB, several detector groups provided their specific libraries (TileCal, MDT, Pixels, RPC)
• GNAM was a useful tool, especially at the beginning, to understand the detector behaviour, to find faulty states and to get electronic calibrations
V. Vercesi - INFN Pavia
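The core/user-library split might look roughly like the following C++ sketch. This is a mock-up of the plugin contract under my own naming (book/decode, Histogram1D), not the actual GNAM API.

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    struct Histogram1D {                       // stand-in for a histogram published
        std::string name;                      // to the Online Histogramming Service
        std::vector<int> bins;
    };

    class UserLibrary {                        // one per detector group (Tile, MDT, ...)
    public:
        virtual ~UserLibrary() = default;
        virtual void book(std::map<std::string, Histogram1D>& h) = 0;
        virtual void decode(const std::vector<std::uint32_t>& fragment,
                            std::map<std::string, Histogram1D>& h) = 0;
    };

    class MdtLibrary : public UserLibrary {    // toy detector-specific plugin
    public:
        void book(std::map<std::string, Histogram1D>& h) override {
            h["mdt_hits"] = Histogram1D{"mdt_hits", std::vector<int>(64, 0)};
        }
        void decode(const std::vector<std::uint32_t>& frag,
                    std::map<std::string, Histogram1D>& h) override {
            for (std::uint32_t word : frag)    // pretend each word is a tube number
                h["mdt_hits"].bins[word % 64] += 1;
        }
    };

    int main() {
        std::map<std::string, Histogram1D> histos;
        MdtLibrary lib;
        lib.book(histos);                      // core: book at configuration time
        lib.decode({3, 7, 3, 67}, histos);     // core: hand over one sampled fragment
        // the core would now publish histos to the Online Histogramming Service,
        // where the interactive presenter picks them up
    }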

  18. Monitoring: the Gatherer
[Diagram: Gatherer processes sit between the monitoring producers (ROB, ROS, SFI, SFO, LVL1, LVL2/EF, sub-detector slow control, the calibration/reconstruction farm) and the consumers (shift-crew and expert displays, alarms and status displays, archivers, Tier-0 monitoring, data-quality assessment), with reference, configuration and data-quality databases attached and dynamic allocation of the links.]
• About 10 monitoring algorithms were publishing between 800 and 1000 histograms concurrently
• including detector standalone distributions, correlations, and EF performance
• The latency overhead induced by the monitoring steps is at present acceptable (needs more validation)
V. Vercesi - INFN Pavia
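Conceptually, a Gatherer merges identically named histograms coming from many providers into one summed view for the displays. A small, self-contained C++ illustration (types and function names are mine, not the Gatherer's real interface):

    #include <map>
    #include <string>
    #include <vector>

    using Histo = std::vector<int>;                   // bin contents
    using HistoSet = std::map<std::string, Histo>;    // histogram name -> histogram

    // Merge the histogram sets published by many nodes (e.g. one per SFI/ROS)
    // into a single summed set, bin by bin.
    HistoSet gather(const std::vector<HistoSet>& providers) {
        HistoSet total;
        for (const auto& node : providers)
            for (const auto& [name, h] : node) {
                auto& sum = total[name];
                if (sum.size() < h.size()) sum.resize(h.size(), 0);
                for (std::size_t i = 0; i < h.size(); ++i)
                    sum[i] += h[i];                   // bin-by-bin addition
            }
        return total;
    }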

  19. PESA
• Physics and Event Selection Architecture
• In the HLT, the selection strategy is built around the identification of physics objects
• PESA Core SW is responsible for the implementation of the Steering and Control
• built around standard Athena components
• PESA Algorithms evolves and develops the HLT algorithmic tools, using realistic data access and handling
• LVL2 specialized algorithms; EF algorithms adapted from offline
• Important deployment in HLT testbeds
• PESA Validation and Performance applies the tools in a structured way to data samples, to extract efficiencies, rates, rejection factors and physics coverage
• Builds on past experience from the TP and the TDR
• CERN/LHCC 2000-17 and CERN/LHCC 2003-022
• Stems from an established structure, laid out in parallel with the organization of the Combined Performance working groups, in 5 main lines ("vertical slices")
• Electrons and photons
• Muons
• Jets / Taus / ETmiss
• b-tagging
• B-physics
V. Vercesi - INFN Pavia

  20. Muon slice
• The LVL2 and EF muon algorithms have been extensively tested on data simulated in ATLAS
• LVL2: muFast
• Task: confirm the LVL1 trigger with a more precise pT estimate within a Region of Interest (RoI)
• Global pattern recognition, track fit, fast pT estimate via a Look-Up Table, with no use of time-consuming fit methods (see the sketch below)
• Event Filter: TrigMoore
• Based on the offline reconstruction package Moore
• Can run seeded (reconstruction starting from the RoIs of the previous levels)
• Precise pT determination
• Moore (offline version) was already successfully tested as EF during the 2003 test beam
• The 2004 test beam has been a fundamental step forward: it tested the complete muon trigger slice, including HLT steering and seeding
V. Vercesi - INFN Pavia
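The look-up-table idea is simply to replace an iterative track fit with a precomputed map from the measured quantities to pT. A toy C++ version, assuming a made-up one-dimensional parametrisation in the measured sagitta (the real muFast table is also binned in eta and phi):

    #include <array>
    #include <cstddef>

    constexpr std::size_t NBINS = 128;

    // Fill the table once at configuration time; the 100/sagitta form is a toy
    // stand-in for the real parametrisation (pT scales like 1/sagitta).
    std::array<double, NBINS> makeLut() {
        std::array<double, NBINS> lut{};
        for (std::size_t i = 0; i < NBINS; ++i) {
            double sagitta_mm = 0.5 + i * 0.25;        // bin centre
            lut[i] = 100.0 / sagitta_mm;               // toy pT in GeV
        }
        return lut;
    }

    // O(1) estimate at trigger time: one clamp, one index, no iterative fit.
    double fastPt(double sagitta_mm) {
        static const auto lut = makeLut();
        if (sagitta_mm < 0.5) sagitta_mm = 0.5;        // clamp below first bin
        std::size_t bin = static_cast<std::size_t>((sagitta_mm - 0.5) / 0.25);
        if (bin >= NBINS) bin = NBINS - 1;             // clamp above last bin
        return lut[bin];
    }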

  21. Muon Level-2 partition
[Screenshots: the DAQ Run Control showing the L2 partition up and running with L1, RPC and MDT; beam profiles on MDT and RPC; MDT hit clusters displayed by the online presenter.]
• Further integration during (and after…) the combined 25 ns run
• Code stable: Level-2 with muFast introduced in the standard DAQ partition
• Communication between muFast and TrigMoore was correct
• Muon sagitta reconstructed at Level-2, but the correlation with the EF is incomplete
• Nevertheless, all HLT functionalities have been successfully tested
V. Vercesi - INFN Pavia

  22. MuFast
• MuFast pattern recognition and data preparation both work very well, in the test beam as well as in the testbeds
• Data-preparation time is one of the most problematic issues in PESA
• MuFast is today the only algorithm compliant with the LVL2 latency (10 ms)
• Work in progress to assess rate evaluations and efficiencies
• The big plan for this year is the extension to the end-cap
• in collaboration with Israeli and US groups
• Also need a better assessment of Detector Description compliance (GeoModel)
V. Vercesi - INFN Pavia

  23. TrigMoore
• Huge activity to study TrigMoore performance in the presence of cavern background (safety factors 1 to 10) and pile-up events at 1x and 2x 10^33 cm^-2 s^-1
• The fake-muon rate may become particularly important when the algorithm is applied at the EF "unseeded" by LVL2
• Good performance of the seeded version today (latency)
• Need the extension to the end-cap
• Also need a better evaluation of the physics performance
V. Vercesi - INFN Pavia

  24. LVL1
• The LVL1 simulation is of course an integral part of the measurement of the full muon slice performance
• A lot of work done in the past
• Cabling, efficiency, robustness
• Next steps (with available manpower)
• Efficiency studies with cavern background using DC1 data
• Careful evaluation of the needed statistics (signal and background): a big load on the Italian farms
• Building of a "horizontal slice" including the end-caps, to assess LVL1 trigger rates over the full eta range
• New topics (manpower to be defined…)
• Efficiency studies with signal samples from the DC2 production
• Production starting up at CERN with Geant4 and the latest spectrometer layout
• Background studies with Geant4
• Detailed study of the LVL1 timing (cabling, time-of-flight)
• Cosmic trigger
• Physics rates and efficiency
V. Vercesi - INFN Pavia

  25. b-tagging selection
• Identify variables that discriminate between b-jets and u-jets
• d0/σ(d0), as a function of pT (σ(d0) ~ 25 µm at high pT)
• z0: needs primary-vertex reconstruction after track reconstruction; using the same algorithm as in the seed formation we get ~200 µm (precise enough, considering the (similar) z0 resolution of the tracks)
• Number of tracks in the RoI
• Energy fraction of the b candidate
• For each variable, compute the weight variable W and the discriminant variable X (a sketch of this combination follows the slide)
• Evaluate the rejection at LVL2 versus the tagging efficiency
• Combination of the two most effective variables (d0/σ(d0) and z0) using 2D pdfs (accounts for the full correlation between the variables)
• New results: rejection 12.0 at ε(b) = 50%, 4.5 at ε(b) = 70%
• Old results (d0 only): rejection 7.0 at ε(b) = 50%, 3.0 at ε(b) = 70%
• B-physics implications under study
V. Vercesi - INFN Pavia
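A compact C++ illustration of the weight/discriminant construction: W is the b-to-u likelihood ratio and X = W/(1+W). The pdfs here are toy stand-ins; the real analysis uses binned 2D pdfs of the two variables measured on simulated b- and u-jets.

    #include <cmath>

    // Toy pdfs: b-jet tracks have wider impact-parameter significance and z0
    // spreads than u-jet tracks; only the ratio matters for W.
    double pdf_b(double d0sig, double z0) {
        return std::exp(-0.1 * d0sig * d0sig - 0.3 * z0 * z0);
    }
    double pdf_u(double d0sig, double z0) {
        return std::exp(-0.5 * d0sig * d0sig - 0.5 * z0 * z0);
    }

    // X lies in [0,1]; cutting on X sets the working point on the
    // efficiency-versus-rejection curve quoted in the slide.
    double discriminant(double d0sig, double z0) {
        double w = pdf_b(d0sig, z0) / pdf_u(d0sig, z0);  // weight W
        return w / (1.0 + w);                            // discriminant X
    }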

  26. New TDAQ Organization • Italian activities and roles • Pre-series procurements • Status, deployment • Documentation • Activities • Combined Test Beam results • Monitoring and ROD Crate DAQ • Algorithms development • Planning and outlook • Systems commissioning • Cosmic data taking V. Vercesi - INFN Pavia

  27. PESA Validation & Performance
• Building of Trigger Menus
• Evolve and complement the work done in the present slices
• Slices will always be part of the PESA validation process
• People developing and trying algorithms will necessarily apply them to some sample in order to extract information about their behaviour
• "Slices", however, are only ingredients of the recipe needed in the runtime phase of ATLAS, where the complete Menu is the only global element that can be optimized against "environmental" conditions (detector knowledge, machine background, etc.)
• Operate the steering on multiple combinations of objects
• Physics validation use-cases
• A list of items of increasing complexity, moving from the simple processes used now (like Z → 2e or Z → 2μ) to others capable of addressing more complex menus (like H → 2e2μ, or top, or …)
• Need feedback and help to select the most interesting ones
• Study the feasibility of an exercise similar to the Athens one for physics, where a mixture of signal samples (plus some background) is produced and the Trigger Menu is tested (blindly) against those data
• PESA Selection commissioning
• On a time scale even earlier than the "final" Trigger Menu
• Need to be ready for the cosmic data taking
• Prepare modified algorithms if needed (e.g. non-pointing tracks)
• Understand detector needs and collect the corresponding requirements in advance
V. Vercesi - INFN Pavia

  28. ATLAS Commissioning Phases
• Commissioning means bringing the ATLAS systems from "just installed" to "operational". It is broken into 4 phases
• Subsystem standalone commissioning
• DCS, LV, HV, cooling, gas, safety, DB recording and retrieving
• DAQ: pedestal runs, electronic calibration, writing and analyzing data
• Integrate subsystems into the full detector
• A skeleton TTC needs to be available
• Cosmic rays: recording data, analyzing/understanding them, distributing them to remote sites
• Ad-hoc DAQ, trigger and algorithms will be needed
• Single beam, first collisions, increasing rates, etc.
• Wow…
• A sizeable part of the commissioning activities will be done during the installation itself
• Phases will overlap, since different systems may be in different phases
• For the barrel calorimeter, electronics commissioning will start soon
• The Tile calorimeter will start cosmics data taking this fall
V. Vercesi - INFN Pavia

  29. HLT Commissioning
• Commissioning is a set of activities spanning the time interval from the installation of the HLT racks and nodes…
• A rack is the elementary unit for commissioning
• Cooling, power and network cables are connected
• OS, Dataflow and Online software are installed
• …to the phase when the HLT is filtering physics data and recording them
• HLT selection algorithms are installed and running stably
• The complete trigger menu (at least for early physics) is configured
• The trigger selection efficiencies and background rejection rates are understood and can serve as input for physics measurements
• It is also clear that the time scales are "shifted" with respect to the rest of the detectors
• Installation will happen later than for other systems
• The definition of Phase-1 commissioning is the most urgent
• Heavily use the pre-series to exercise the procedures for installation and commissioning
• Important steps will cover the integration of the detectors into the full system
• Involves operations that have a very strong coupling with the offline commissioning activities
• Development of specific algorithms looking at simple data decoding (cabling, …)
• The final commissioning phases extend far beyond the data-taking startup (interface with the run coordinator team)
• Need good coordination with the physics groups
• Need to think of the trigger as a single object to be commissioned (including LVL1)
V. Vercesi - INFN Pavia

  30. Cosmic muons in ATLAS
[Figure: ATLAS Geant simulation of the initial detector in its cavern, with the surface building, the PX14 (18.0 m inner Ø) and PX16 (12.6 m inner Ø) shafts, concrete shielding (2.5 g/cm³), air, and the surrounding rock, approximated as silicon (2.33 g/cm³), over a 600 m x 600 m x 200 m deep volume.]
V. Vercesi - INFN Pavia

  31. Cosmic trigger issues
• How to trigger on cosmics? RPC, TGC (?)
• From preliminary full simulations of LVL1
• Cosmics: up to ~100 Hz pass the low-pT RPC LVL1
• How to increase the cosmic trigger acceptance?
• An exciting last opportunity! After this, one will only be asked to reduce trigger rates…
• Muon system
• The requirement for cosmics (and beam-halo) triggers was included in the design:
• e.g. the trigger ASICs include programmable delays to compensate for the time-of-flight of down-going cosmic-ray muons in the barrel
• Projectivity constraints result from the cabling between planes of trigger chambers
• Lots of flexibility in the system
• Timing adjustments
• Open LVL1 roads
• Relaxed coincidence requests
• At LVL2, modified trigger algorithms can help in selecting non-pointing muons
• Tile Calorimeter system
• The RPCs will be commissioned later than foreseen
• June 2005: the Tile calorimeter in the pit is equipped with electronics, and commissioning with cosmics can start
• A self-triggering scheme is needed while waiting for the RPCs
• Consider back-to-back trigger towers (Δη x Δφ = 0.1 x 0.1, full calorimeter depth), asking E > 1.5 GeV in both towers (a toy version follows the slide)
• Expected rate from full simulation: ~130/hour for 16 top + 16 bottom modules
• Ongoing studies to refine the present understanding
• Soon to be checked with real measurements
V. Vercesi - INFN Pavia
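A toy C++ version of that self-triggering condition, under my own (hypothetical) tower indexing of 64 phi bins and a signed eta index; a roughly vertical cosmic muon through the detector centre crosses the top and bottom of the barrel at about the same eta and at opposite phi.

    // Toy version of the Tile self-trigger sketched above; the tower indexing
    // (64 phi bins in [0, 64), signed eta index) is invented for illustration.
    struct Tower { int ieta; int iphi; double energy_gev; };

    bool backToBackTrigger(const Tower& top, const Tower& bottom) {
        bool opposite_phi = bottom.iphi == (top.iphi + 32) % 64;  // phi + pi
        bool same_eta     = bottom.ieta == top.ieta;              // ~vertical track
        return opposite_phi && same_eta
            && top.energy_gev > 1.5 && bottom.energy_gev > 1.5;   // E > 1.5 GeV
    }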

  32. Milestones and Finance
• 30/06/2005
• TDAQ - installation, test and use of the "pre-series" (~10% TDAQ slice)
• 24/12/2005
• TDAQ - installation and test of the Pixel, LAr, Tile and Muon ROSes (interfacing to the ROD crate and integration into the DAQ)
• The CORE budget allocated for 2005 is 214 k€
• The TDAQ Resource Committee (in which V.V. participates) is currently planning the details of the financial commitments and of the sharing
• INFN is committed on the Read-Out System and Online components
• No substantial modifications to the sharing plan are foreseen
• We will proceed with the purchases (most likely still through CERN orders), after informing the referees, as soon as possible
V. Vercesi - INFN Pavia

  33. Cost Profile (kCHF) V. Vercesi - INFN Pavia

  34. Conclusions
• The current status of the HLT/DAQ project is well aligned with the milestones foreseen for 2005
• Many small details certainly have to be evaluated carefully, because the project is extremely complex and the Italian responsibilities cover several areas
• It would be extremely positive to have larger contributions to the workforce, now that the construction effort is over
• Success of the Combined Test Beam activities
• Many Italians in highly visible roles
• We have learned many things; we must find the time to stop and reflect on them
• Algorithm development is proceeding well; ever more emphasis will be placed on complex physics-performance measurements
• The new organizational structure of the project is defined
• It will evolve further as the data-taking period approaches
• The Italian component is well represented
• Recognition of all the commitments successfully carried through by our researchers in these years
V. Vercesi - INFN Pavia
