
Presentation Transcript


  1. Contents
  • LHC
    • physics program
    • detectors (ATLAS, LHCb)
  • LHC T/DAQ system challenges
  • T/DAQ system overview
    • ATLAS
    • LHCb
  • T/DAQ trigger and data collection scheme
    • ATLAS
    • LHCb

  2. CERN and the Large Hadron Collider, LHC
  The LHC is being constructed underground inside a 27 km tunnel. Head-on collisions of very high energy protons. ALICE, ATLAS, CMS and LHCb are the approved experiments.

  3. The ATLAS LHC Experiment

  4. The LHCb LHC Experiment

  5. The LHCb LHC Experiment - an event signature

  6. Challenges for the Trigger/DAQ system
  The challenges:
  • unprecedented LHC rate of 10^9 interactions per second
  • large and complex detectors with O(10^8) channels to be read out
  • the bunch crossing rate of 40 MHz requires a decision every 25 ns
  • event storage rate limited to O(100) MB/s
  The big challenge: to select rare physics signatures with high efficiency while rejecting common (background) events; e.g. H → γγ (mH ≈ 100 GeV) has a rate of ~10^-13 of the LHC interaction rate (see the rejection estimate sketched below).
  Approach: a three-level trigger system.
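The required rejection follows directly from the numbers on this slide. A minimal back-of-the-envelope sketch is given below; the ~1 MB average event size is an assumption for illustration, not a figure from the talk.

```python
# Back-of-the-envelope estimate of the rejection the trigger system must provide.
# Assumption (not from the slide): an average event size of roughly 1 MB.
interaction_rate_hz = 1e9        # ~10^9 interactions per second
bunch_crossing_rate_hz = 40e6    # one bunch crossing every 25 ns
storage_bandwidth_mb_s = 100.0   # O(100) MB/s allowed to mass storage
event_size_mb = 1.0              # assumed average event size

max_storage_rate_hz = storage_bandwidth_mb_s / event_size_mb   # ~100 Hz to tape
rejection_factor = interaction_rate_hz / max_storage_rate_hz   # ~10^7 needed overall

print(f"events to tape: ~{max_storage_rate_hz:.0f} Hz")
print(f"overall rejection factor: ~{rejection_factor:.0e}")
print(f"interactions per bunch crossing: ~{interaction_rate_hz / bunch_crossing_rate_hz:.0f}")
```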

  7. ATLAS Trigger/DAQ - system overview
  • The LVL1 decision is based on coarse-granularity calorimeter data and the muon trigger stations
  • LVL2 can get data at full granularity and can combine information from all detectors. Emphasis on fast rejection. Regions of Interest (RoI) from LVL1 are used to reduce the data requested (a few % of the whole event) in most cases
  • The EF (Event Filter) refines the selection according to the LVL2 classification, performing a fuller reconstruction. More detailed alignment and calibration data can be used
  Rates through the chain (interaction rate ~1 GHz, bunch crossing rate 40 MHz):
  • LVL1: rate 100 kHz, latency < 2.5 μs, throughput 200 GB/s
  • LVL2: rate 1 kHz, latency ~10 ms, throughput 4 GB/s
  • EF: rate 100 Hz, latency ~1 s, throughput 200 MB/s
  [Diagram: data from the CALO, MUON and TRACKING detectors is held in pipeline memories during the LVL1 decision, passes through Readout Drivers into Readout Buffers, is sampled by LVL2 via the Regions of Interest, and after a LVL2 accept flows through the Event Builder to the EF and on to data recording.]
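The quoted rates and throughputs hang together once an average event size is assumed. The sketch below is illustrative only: the 2 MB event size and the 2% RoI fraction are assumptions, chosen because they reproduce the throughput figures on the slide.

```python
# Consistency check of the three-level chain using the rates quoted on the slide.
# The 2 MB event size and the 2% RoI fraction are illustrative assumptions, chosen
# so that the computed bandwidths reproduce the slide's throughput figures.
EVENT_SIZE_MB = 2.0   # assumed average event size read out after a LVL1 accept
ROI_FRACTION = 0.02   # LVL2 requests only a few percent of the event via RoIs

lvl1_accept_hz = 100e3   # LVL1 output rate, latency < 2.5 us
lvl2_accept_hz = 1e3     # LVL2 output rate, latency ~10 ms
ef_accept_hz = 100.0     # Event Filter output rate, latency ~1 s

readout_gb_s = lvl1_accept_hz * EVENT_SIZE_MB / 1e3                     # into readout buffers
roi_request_gb_s = lvl1_accept_hz * EVENT_SIZE_MB * ROI_FRACTION / 1e3  # LVL2 RoI traffic
storage_mb_s = ef_accept_hz * EVENT_SIZE_MB                             # to mass storage

print(f"after LVL1 accept: {readout_gb_s:.0f} GB/s into the readout buffers")   # ~200 GB/s
print(f"LVL2 RoI requests: {roi_request_gb_s:.0f} GB/s over the network")       # ~4 GB/s
print(f"to storage after EF: {storage_mb_s:.0f} MB/s")                          # ~200 MB/s
print(f"rejection: LVL2 x{lvl1_accept_hz / lvl2_accept_hz:.0f}, EF x{lvl2_accept_hz / ef_accept_hz:.0f}")
```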

  8. ATLAS overall data collection scheme
  [Diagram: in the Readout Subsystem, the Readout Drivers (ROD) feed the Readout Subsystems (ROS), which connect to a large data collection network switch. The LVL1 result reaches the RoI Builder (RoIB) and the LVL2 supervisor (L2SV), which steers the LVL2 farm of processing units (L2PU) behind its own switch. The DataFlow Manager (DFM) assigns accepted events to the Sub-Farm Inputs (SFI), which feed the EF sub-farms of CPUs through the sub-farm switches.]
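To make the roles of these components concrete, here is a minimal, purely illustrative sketch (not the real ATLAS dataflow software) of the event-building step: on a LVL2 accept the DFM assigns the event to one SFI, which pulls the fragments from every ROS.

```python
# Simplified illustration of ATLAS event building: DFM assigns events to SFIs,
# each SFI collects one fragment per ROS and assembles the full event.
from itertools import cycle

class ROS:
    def __init__(self, ros_id):
        self.ros_id = ros_id
    def fragment(self, event_id):
        return f"frag(ros={self.ros_id}, ev={event_id})"

class SFI:
    def __init__(self, sfi_id):
        self.sfi_id = sfi_id
    def build(self, event_id, ros_list):
        # Pull one fragment from every ROS and assemble the full event.
        return {"event": event_id, "sfi": self.sfi_id,
                "fragments": [r.fragment(event_id) for r in ros_list]}

class DFM:
    """Assigns LVL2-accepted events to SFIs (round-robin for simplicity)."""
    def __init__(self, sfis):
        self._next_sfi = cycle(sfis)
    def assign(self, event_id):
        return next(self._next_sfi)

ros_list = [ROS(i) for i in range(5)]
sfis = [SFI(i) for i in range(3)]
dfm = DFM(sfis)

for event_id in range(4):            # events accepted by LVL2
    sfi = dfm.assign(event_id)
    full_event = sfi.build(event_id, ros_list)
    print("event", full_event["event"], "built by SFI", full_event["sfi"],
          "from", len(full_event["fragments"]), "fragments")
```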

  9. Why trigger on the GRID?
  • First code benchmarking shows that the local CPU power may not be sufficient (budget + manageable size of the cluster) → distribute the work over remote clusters (a rough sizing sketch follows this list)
  • Why not? GRID technology will provide platform-independent tools which perfectly match the needs of running, monitoring and controlling the remote trigger algorithms.
  • Development of dedicated tools (based on GRID technology) ensuring a quasi real-time response of the order of a few seconds might be necessary → a task for CROSSGRID
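A rough sizing argument of this kind can be put in a few lines. All numbers below (per-event CPU time, farm size, input rate) are illustrative assumptions, not benchmark results from the talk.

```python
# Rough sizing estimate behind "local CPU power may not be sufficient".
# Every number here is an assumption chosen only to illustrate the argument.
input_rate_hz = 1e3          # events per second entering the high-level trigger
cpu_time_per_event_s = 1.0   # assumed average algorithm time per event on one CPU
local_cpus = 500             # assumed affordable/manageable local farm size

cpus_needed = input_rate_hz * cpu_time_per_event_s
shortfall = max(0.0, cpus_needed - local_cpus)
offload_fraction = shortfall / cpus_needed

print(f"CPUs needed to keep up: {cpus_needed:.0f}")
print(f"local farm: {local_cpus} CPUs -> offload ~{offload_fraction:.0%} of events to remote clusters")
```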

  10. Data flow diagram
  [Diagram: the Experiment feeds the local high-level trigger, whose accepted events go to Tape. Events set aside for remote processing go into an event buffer, from which the Event dispatcher sends them through the CROSSGRID interface to several remote high-level triggers; their decisions flow back to the dispatcher.]
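A minimal sketch of this flow is given below, assuming a simple buffer-and-dispatch model; the class names, the dummy selection and the placeholder site names are illustrative, not part of any real CROSSGRID interface.

```python
# Sketch of the data flow above: events the local high-level trigger cannot absorb are
# placed in a buffer, and the Event dispatcher ships them through a (here faked)
# CROSSGRID interface to remote high-level trigger farms, which return decisions.
from collections import deque
import random

class RemoteHLT:
    """Stand-in for a remote high-level trigger farm reached via the CROSSGRID interface."""
    def __init__(self, site):
        self.site = site
    def process(self, event):
        # Dummy selection: accept a small fraction of events.
        accepted = random.random() < 0.1
        return {"event": event["id"], "site": self.site,
                "decision": "accept" if accepted else "reject"}

class EventDispatcher:
    def __init__(self, remote_sites):
        self.buffer = deque()                      # event buffer for remote processing
        self.remotes = [RemoteHLT(s) for s in remote_sites]
    def buffer_event(self, event):
        self.buffer.append(event)
    def dispatch_all(self):
        decisions = []
        while self.buffer:
            event = self.buffer.popleft()
            remote = random.choice(self.remotes)   # trivial load balancing
            decisions.append(remote.process(event))
        return decisions

dispatcher = EventDispatcher(["site-A", "site-B", "site-C"])   # placeholder remote centres
for event_id in range(5):                                      # events spilled by the local trigger
    dispatcher.buffer_event({"id": event_id})
for decision in dispatcher.dispatch_all():
    print(decision)
```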

  11. Operational features
  • The Event dispatcher is a separate module, easy to activate and deactivate
  • The implementation is independent of the specific trigger solutions of a given experiment
  • Dynamic resource assignment keeps the system running within the assumed performance limits (event buffer occupancy, link bandwidth, number of remote centres, timeout rate, ...)
  • Fault tolerance and timeout management (no decision within the allowed time limit); see the sketch after this list
  • A user interface allows monitoring and control by a shifter
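The timeout handling can be illustrated in a few lines; the time budget, the default fallback decision and the fake remote call below are assumptions made for the sketch, not design values from the talk.

```python
# Sketch of timeout management: if a remote site returns no decision within the allowed
# time, a default action is taken and the timeout rate is tracked for the shifter.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

ALLOWED_TIME_S = 2.0         # assumed quasi real-time budget of "a few seconds"
DEFAULT_DECISION = "accept"  # assumed safe fallback when no remote decision arrives

def remote_decision(event_id, delay_s):
    """Stand-in for a remote high-level trigger call with variable response time."""
    time.sleep(delay_s)
    return "reject"

timeouts = 0
delays = [0.1, 0.2, 5.0, 0.3]              # third "remote call" exceeds the time limit
with ThreadPoolExecutor(max_workers=4) as pool:
    for event_id, delay in enumerate(delays):
        future = pool.submit(remote_decision, event_id, delay)
        try:
            decision = future.result(timeout=ALLOWED_TIME_S)
        except FutureTimeout:
            decision = DEFAULT_DECISION     # no decision within the allowed time limit
            timeouts += 1
        print(f"event {event_id}: {decision}")
print(f"timeout rate: {timeouts}/{len(delays)}")
```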

  12. Testbed for distributed trigger
  Easy to test by substituting the real experiment data with a PC sending Monte Carlo data.
  [Diagram: a PC at CERN sends Monte Carlo data into the Event Buffer; the Event Dispatcher (with its Monitoring and Control Tool) distributes the events to remote sites in Spain, Poland and Germany and collects their decisions.]
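A toy version of this substitution is sketched below: a generator stands in for the experiment and fills the same event buffer the dispatcher would serve. The event content is a made-up placeholder, not a real Monte Carlo format.

```python
# Testbed sketch: a PC at CERN replaces the experiment by pushing toy Monte Carlo
# events into the event buffer that the Event Dispatcher serves to the remote sites.
import random
from collections import deque

def generate_mc_event(event_id):
    """Toy Monte Carlo event: a few random 'track momenta' instead of real detector data."""
    return {"id": event_id, "tracks": [round(random.uniform(0.5, 50.0), 2) for _ in range(4)]}

event_buffer = deque()                     # same buffer the real experiment would feed
for event_id in range(10):
    event_buffer.append(generate_mc_event(event_id))

# From here on, the Event Dispatcher and the remote sites are exercised exactly as they
# would be with real data, which is what makes the testbed realistic.
print(f"{len(event_buffer)} Monte Carlo events queued for dispatch to remote sites")
```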

  13. Summary
  • Trigger systems for the LHC experiments are challenging
  • GRID technology may help to overcome the lack of local CPU power
  • The proposed distributed trigger structure, with the Event dispatcher as a separate module, offers a cross-experiment platform independent of the specific local trigger solutions
  • Implementation on a testbed is feasible even without running experiments
  • Dedicated tools are to be developed within the CROSSGRID project to ensure interactivity, monitoring and control
