
Data acquisition for continuous readout and triggered experiments

This presentation explores the concepts and components of a TDAQ (Trigger and Data Acquisition) system, focusing on continuous readout and triggered experiments. It covers the basics of TDAQ, its building blocks and components, including the trigger, the DAQ, and data processing. The talk also discusses the challenges and strategies for efficient data collection, event building, and storage. An example TDAQ system (ProtoDUNE SP) is presented, and the importance of experience and of understanding the experiment's goals and detector behaviors is emphasized.


Presentation Transcript


  1. Data acquisition for continuous readout and triggered experiments Giovanna Lehmann Miotto / CERN

  2. Outline • TDAQ concepts & components • Material inspired by the ISOTDAQ lectures • Continuous, locally & globally triggered readout • A TDAQ system example

  3. TDAQ concepts & components

  4. What is TDAQ? • Trigger and Data AcQuisition system is a vague term • In this talk: the system in an experiment in charge of the selection, readout, event building and storage of the physics data, as well as the control, configuration and monitoring of the data taking operations • TDAQ is a heterogeneous field • An alchemy of physics, electronics, information and communication technologies • A field in which science and engineering go hand in hand • A field in which experience, open-mindedness and understanding of the experiment's goals and detector behaviors are key to success

  5. TDAQ building blocks • Trigger • Either selects interesting physics data or rejects boring ones • Can be as simple as a coincidence (logic AND) or as complicated as a multistage system mixing custom electronics, FPGAs and thousands of software algorithms • The aim is always to be "as simple and robust as possible" and "as fast as needed" • The trigger directly impacts the quality of the physics of the experiment • Malfunctioning trigger => loss of physics • DAQ • Gathers data produced by detectors => Readout • Forms the physics events => Event building • Stores event data => Data logging • Provides Run Control, Configuration, Monitoring
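The simplest trigger mentioned above, a coincidence (logic AND) of two discriminated channels, can be sketched in a few lines. This is an illustrative toy (the threshold value and sample lists are invented, not from the slides):

```python
# Toy sketch of a coincidence trigger: fire only if BOTH channels
# contain a sample above a discriminator threshold.

def discriminate(samples, threshold):
    """Simple discriminator: True if any sample exceeds the threshold."""
    return any(s > threshold for s in samples)

def coincidence(channel_a, channel_b, threshold=100):
    """Logic AND of the two discriminator outputs."""
    return discriminate(channel_a, threshold) and discriminate(channel_b, threshold)

# One noise-only channel and one channel with a real pulse (invented numbers):
noise = [12, 8, 15, 9]
pulse = [10, 250, 180, 11]

print(coincidence(pulse, pulse))  # both channels fire -> trigger
print(coincidence(pulse, noise))  # only one fires -> no trigger
```

A real coincidence unit would of course act on analogue pulses within a resolving time, but the selection logic is exactly this AND.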

  6. TDAQ components [Diagram: N-channel readout chain, steered by the TRIGGER] • Front-End: data digitization and buffering (banks of ADCs) • Readout: data extraction, formatting and buffering (processing units) • Data Collection / Event Building: event assembly and buffering • Event Filtering: event rejection and buffering (processing units) • Event Logging / Storage: file storage and buffering

  7. TDAQ basics [Diagram: T sensor → ADC card → CPU → disk, read at a fixed trigger frequency] • Fixed-frequency reading • ADC -> analog-to-digital conversion • The CPU processes the data and sends them to storage • The system is limited by the time τ needed for ADC + CPU processing + storage • The max sustainable rate is the inverse of τ, e.g. τ = 1 ms -> max rate = 1/τ = 1 kHz • The trigger must be tuned to fire at a sustainable fixed rate
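The rate limit on the slide is a one-line computation; spelled out for the numbers given there:

```python
# The slide's rate limit: with a total per-event processing time tau
# (ADC + CPU + storage), the maximum sustainable trigger rate is 1/tau.

tau = 1e-3            # seconds per event, as on the slide (1 ms)
rate_max = 1.0 / tau  # Hz

print(f"tau = {tau * 1e3:.0f} ms -> max rate = {rate_max:.0f} Hz")  # 1 kHz
```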

  8. TDAQ basics: "real" trigger [Diagram: ADC (τ = 1 ms) → Processing → disk] • Events are asynchronous and unpredictable • e.g. β decays

  9. TDAQ basics: "real" trigger [Diagram: a discriminator-based TRIGGER starts the ADC (τ = 1 ms) via a delay; interrupt to Processing → disk] • Events are asynchronous and unpredictable • e.g. β decays • A physics trigger is needed • Discriminator: generates an output ONLY if the amplitude of the input exceeds a certain threshold • A delay is introduced to compensate for the trigger latency

  10. De-randomization & busy logic [Diagram: triggers at f = 1 kHz, mean inter-arrival time l = 1 ms; the trigger is ANDed with NOT busy; a latch (SET ready / CLEAR) asserts busy while the ADC + Processing chain (τ = 1 ms) works; then disk] • The previous sketch is too simplistic • Protect the DAQ from triggers that arrive too close together!

  11. TDAQ basics: "real" trigger [Diagram: stochastic process, distribution of events in time with f = 1 kHz; TRIGGER with delay and BUSY LOGIC in front of the ADC (τ = 1 ms) → Processing → disk] • Events are asynchronous and unpredictable • e.g. β decays • Dead time!! The efficiency is eff = 1/(1 + fτ)
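The dead-time formula on the slide, eff = 1/(1 + fτ), can be verified with a small Monte Carlo: Poisson-distributed triggers at rate f hit a system that, with no buffering, goes busy for exactly τ after each accepted event. This is a sketch under those assumptions, not code from the lecture:

```python
import random

# Monte Carlo check of the non-paralyzable dead-time formula:
# Poisson triggers at rate f, fixed processing time tau, NO buffer
# -> accepted fraction eff = 1 / (1 + f * tau).

def simulated_efficiency(f, tau, n_events=200_000, seed=42):
    rng = random.Random(seed)
    t, busy_until, accepted = 0.0, 0.0, 0
    for _ in range(n_events):
        t += rng.expovariate(f)   # exponential inter-arrival times
        if t >= busy_until:       # system free: accept, go busy for tau
            accepted += 1
            busy_until = t + tau
    return accepted / n_events

f, tau = 1000.0, 1e-3             # 1 kHz trigger rate, 1 ms processing time
eff_sim = simulated_efficiency(f, tau)
eff_formula = 1.0 / (1.0 + f * tau)   # = 0.5 for these numbers
print(eff_sim, eff_formula)
```

For f = 1 kHz and τ = 1 ms the system loses half of the events, which is exactly why the next slide introduces a de-randomizing buffer.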

  12. De-randomization & busy logic [Diagram: triggers at f = 1 kHz with inter-arrival time distribution, mean l = 1 ms; a FIFO between the ADC and the Processing stage (access time distribution, τ = 1 ms, r = τ/l); busy asserted only when the FIFO is full, data-ready signals the processing] • The previous sketch is too simplistic • Protect the DAQ from triggers that arrive too close together! • Introduce a buffer in order to smooth out the variable event inter-arrival times into a steady data processing rate
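The effect of the de-randomizing FIFO can be seen by extending the dead-time simulation with a queue: Poisson triggers at rate f feed a FIFO of a given depth in front of a processor with fixed service time τ (here r = τ/l = 1, i.e. the processor is only just fast enough on average). The model and its parameters are illustrative assumptions, not from the lecture:

```python
import random

# Poisson triggers at rate f; each accepted event waits in a FIFO of
# `depth` slots (plus one event in service) and takes exactly tau to process.
# Events arriving when the FIFO is full are lost (busy asserted).

def efficiency_with_fifo(f, tau, depth, n_events=100_000, seed=7):
    rng = random.Random(seed)
    t, prev_finish, accepted = 0.0, 0.0, 0
    in_system = []                      # finish times of not-yet-processed events
    for _ in range(n_events):
        t += rng.expovariate(f)         # next Poisson arrival
        in_system = [ft for ft in in_system if ft > t]   # drain finished events
        if len(in_system) < depth + 1:  # room: one in service + `depth` waiting
            finish = max(t, prev_finish) + tau
            in_system.append(finish)
            prev_finish = finish
            accepted += 1
    return accepted / n_events

f, tau = 1000.0, 1e-3                   # f = 1 kHz, tau = l = 1 ms, so r = 1
print(efficiency_with_fifo(f, tau, depth=0))    # no buffer: ~0.5, pure dead time
print(efficiency_with_fifo(f, tau, depth=16))   # modest FIFO: close to 1
```

Even a FIFO of a few entries recovers most of the efficiency lost to dead time, which is the whole point of de-randomization.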

  13. TDAQ basics: multi-channels [Diagram: the TDAQ component chain of slide 6, steered by the TRIGGER, now with N channels feeding each Front-End block] • Front-End: data digitization and buffering (banks of ADCs) • Readout: data extraction, formatting and buffering • Data Collection / Event Building: event assembly and buffering • Event Filtering: event rejection and buffering • Event Logging / Storage: file storage and buffering

  14. TDAQ basics: multi-channels [Diagram: Trigger; multiple Readout, Event Building, Filtering and Storage nodes] • If the readout system is large, multiple nodes may be required to perform event building, filtering and storage • Normally the Readout is the last component for which dedicated hardware development is needed • Leverage commercial devices such as networks, computers, disks…

  15. TDAQ basics: multi-channels [Diagram: as before, with a Data Flow Manager steering the Event Building nodes] • The data flow manager makes the best use of the available resources by flexibly assigning events to the downstream elements of the TDAQ • Very important especially if the data filtering can take a highly variable amount of time and/or if events have very variable sizes • It can block the trigger if downstream resources are exhausted

  16. Data collection • As soon as there is more than one data collector, it is important to think about how to distribute events across them • Static: round-robin • Dynamic: credit-based or least loaded • Decide what should be collected • All data • Only some of the data, used for event filtering • Determine whether application-level flow control is needed • Data sent as soon as available -> minimizes latency; risk of congestion and data loss • Data sent when the destination is ready -> traffic shaping; potentially larger latency • The protocols used for data collection depend strongly on • Data size • Data rate • Number of sources
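The two assignment policies above differ in whether they look at the collectors' state. A minimal sketch, with invented collector names and load counts (not from the lecture):

```python
from itertools import cycle

# Two event-assignment policies: static round-robin (fixed order, blind to
# load) vs. dynamic least-loaded (send each event to the collector with the
# fewest in-flight events). Collector names are hypothetical.

collectors = ["eb01", "eb02", "eb03"]

# Static round-robin: cycle through the collectors in a fixed order.
rr = cycle(collectors)
def assign_round_robin(event_id):
    return next(rr)

# Dynamic least-loaded: track in-flight events per collector.
load = {c: 0 for c in collectors}
def assign_least_loaded(event_id):
    target = min(load, key=load.get)
    load[target] += 1
    return target

print([assign_round_robin(e) for e in range(4)])  # eb01, eb02, eb03, eb01
load["eb01"] = 5                                   # pretend eb01 is overloaded
print(assign_least_loaded(100))                    # avoids eb01
```

A credit-based scheme is a variant of the dynamic policy in which collectors explicitly announce how many events they can still accept.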

  17. Data flow example [Diagram: Trigger, Readout nodes and a Data Flow Manager connected through a switched network to Event Building, Filtering and Storage nodes] • The Trigger assigns an Event ID, which is announced to the readout chain

  18. Data flow example • The Data Flow Manager selects an event builder (Event ID + EB ID) and the Readout nodes send their data fragments to it

  19. Data flow example • Once the event is assembled, its Event ID is acknowledged (Event ID OK)

  20. Data flow example • After filtering, the event is assigned to a storage node (Event ID + STRG ID) and the data are sent on

  21. Data flow example • When storage completes, the Data Flow Manager issues Clear Event ID so that the Readout nodes can free their buffers

  22. Data Flow: example 1 (push style) [Message sequence between Readout, Data flow manager, Event Building, Data Selection and Storage] • Readout receives data and sends the event ID to the data flow manager • The data flow manager answers with event ID / EB ID and the Readout pushes the data to that Event Building node • Event Building signals event ready; the data flow manager answers with event ID / DS ID and the data are pushed to Data Selection • Data Selection returns accept/reject ID; the data flow manager assigns event ID / storage ID and the data are pushed to Storage • Storage signals finished ID and the data flow manager issues clear event ID to the Readout

  23. Data Flow: example 2 (pull style) [Message sequence between Readout, Data flow manager, Event Building, Data Selection and Storage] • Readout receives data and sends the event ID to the data flow manager, which forwards it to Event Building • Event Building pulls the data from the Readout with getData • Data Selection pulls the built event with getNext and returns accept/reject ID • Storage pulls the accepted event with getNext • Storage signals finished ID and the data flow manager issues clear event ID to the Readout
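The pull-style flow of example 2 can be sketched with a couple of toy classes: downstream components ask for work (getData / getNext) instead of having it pushed at them, and the readout buffers are freed only on the final clear. All class and method names here are illustrative, not the actual protocol:

```python
from collections import deque

class Readout:
    """Buffers fragments per event ID until they are pulled and cleared."""
    def __init__(self):
        self.fragments = {}                 # event ID -> data fragment
    def store(self, event_id, data):
        self.fragments[event_id] = data
    def get_data(self, event_id):           # "getData" in the slide
        return self.fragments[event_id]
    def clear(self, event_id):              # "clear event ID" in the slide
        self.fragments.pop(event_id)

class EventBuilder:
    """Pulls one fragment per readout and queues built events for consumers."""
    def __init__(self, readouts):
        self.readouts = readouts
        self.built = deque()
    def build(self, event_id):              # pull fragments from every readout
        self.built.append((event_id, [r.get_data(event_id) for r in self.readouts]))
    def get_next(self):                     # "getNext" from data selection
        return self.built.popleft()

readouts = [Readout(), Readout()]
for r in readouts:
    r.store(1, "fragment")                  # data arrive, event ID 1 announced
eb = EventBuilder(readouts)
eb.build(1)                                 # data flow manager assigned event 1
event_id, event = eb.get_next()             # data selection pulls the event
for r in readouts:
    r.clear(event_id)                       # data flow manager clears the ID
print(event_id, event)
```

The design trade-off is the one on slide 16: pulling shapes the traffic and protects the consumers, at the price of extra round trips and latency.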

  24. TDAQ basics summary • The readout of any system with a non-constant trigger rate requires buffering in order to be efficient • A busy logic must be implemented in order to prevent triggering when the system is not ready to accept more data • Every DAQ system is formed by the same components • Front-end, Readout, Event building, Data logging • Optionally: Event filtering • The order of those components can vary • E.g. filtering before event building • Those elements can be implemented separately or be integrated • E.g. filtering before event building, with event building and storage integrated • Multi-channel TDAQ systems often interconnect the elements via switched networks, and the data flow protocol defines how data fragments and events are exchanged

  25. Continuous, locally and globally triggered readout

  26. Nomenclature • Globally triggered • What we have considered so far • There is a coherent event ID throughout the readout • Front-end data are organized into fragments associated with the event ID • Locally triggered • A trigger element fires when data should be sent to the readout • The trigger refers to individual channels or groups of channels, not to the full front-end • The readout can process incoming data to create fragments corresponding to the trigger • There is no concept of a global event ID at the readout level • Continuous readout • The front-end sends data to the readout at a fixed rate, irrespective of the data content • Data rate and data size are constant in input • There is no indication for the readout on how to group front-end data into fragments corresponding to a physics event

  27. Use cases for different readouts • Colliders • Normally use a global trigger: if something interesting has been seen somewhere, take all the data corresponding to that bunch crossing • Large distributed telescopes • Often use a local trigger: read out data for the portions of the detector that have seen something • Very slow detectors • Sometimes use continuous readout: sample the analogue signals at a fixed rate and let the downstream DAQ decide whether there were any interesting signals

  28. Globally triggered [Diagram: the full TDAQ chain of slide 6, with one global TRIGGER steering Front-End (ADCs), Readout, Data Collection / Event Building, Event Filtering and Event Logging / Storage]

  29. Globally triggered: ATLAS@LHC

  30. Locally triggered [Diagram: the same chain, but with a local TRG element per channel group instead of a global trigger; the Event Building, Event Filtering and Event Logging stages are marked "???" since there is no global event to refer to]

  31. Locally triggered: Auger observatory

  32. Continuous [Diagram: the same chain driven by a CLOCK instead of a trigger; the data extraction, event assembly and event rejection stages are marked "???" since nothing yet defines an event]

  33. Continuous readout: DUNE • 18 m x 17 m x 66 m => 17 kton LAr • TPC with ~400 k channels

  34. DUNE • Extremely varied physics program • Neutrino beam -> external trigger possible • Black hole formation -> no trigger • Proton decay -> local and rare signature • TPC sampled at 2 MHz in continuous readout; photon detectors sampled at 150 MHz (local triggering) • The signal for a particle forms over milliseconds • Downstream TDAQ elements decide when anything interesting happened inside the active volume • Combination over time windows of thresholds, tracking, distributed activity signatures, …

  35. Possible DUNE TDAQ [Diagram: APAs and Photon Detectors feed ADCs; ~50 Tb/s flow into Readout nodes, which extract trigger primitives; a local trigger and L0/L1 triggers with external info steer the Event Builders and Storage] • Readout with very large buffers to account for the long L0/L1 latency (tens of seconds) • Trigger primitive extraction integrated into the readout (or carried out in a separate computer farm) • Data compression to reduce storage and network needs • Extra processing and data reduction after the event builder is possible

  36. What is an event? • The definition of an event changes according to the readout mode • Globally triggered • All data of the detector corresponding to a specific event ID defined by the trigger • Locally triggered • Data of those detector components which have detected activity within a time window and have been recognized by the data selection logic as part of the same physical process • Continuous readout • Data of those parts of the detector for which the data selection logic has identified interesting and coherent activity within a time window • All data of the detector corresponding to a time window in which the data selection logic has detected activity • While in a globally triggered system a common timestamp is a good and important cross-check, in the other cases consistent time-stamping becomes the only mechanism to combine data that belong together!
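The point above, that in locally triggered or continuous readout a consistent timestamp is the only handle for grouping data that belong together, can be sketched by binning fragments into fixed time windows. Window size, timestamps and channel names below are illustrative:

```python
# Group timestamped fragments into "events" by fixed time window:
# everything falling into the same window is assumed to belong together.

WINDOW = 5_000  # ticks per event window (e.g. a 5 ms window at 1 MHz; assumed)

def group_by_window(fragments, window=WINDOW):
    """fragments: list of (timestamp, channel) -> dict window_id -> fragments."""
    events = {}
    for ts, channel in sorted(fragments):
        events.setdefault(ts // window, []).append((ts, channel))
    return events

fragments = [(100, "ch0"), (4_900, "ch7"), (5_100, "ch3"), (12_000, "ch1")]
print(group_by_window(fragments))
# the first two fragments share window 0; the others fall in windows 1 and 2
```

A real system would use overlapping or activity-driven windows rather than fixed bins, but the principle, grouping by timestamp alone, is the same.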

  37. Summary on readout modes • Depending on the type of detector and physics there are three categories of readout: • Globally triggered, locally triggered, continuous • There is no good or bad mode; the right approach needs to be chosen according to the experiment • Sharing a common timing distribution system is a big advantage in any case • An "event" can be defined as the collection of • All detector data corresponding to the trigger • Those detector data for which activity has been detected within a time window • Individual areas of activity in a detector • …

  38. ProtoDUNE SP • A TDAQ system combining ideas of locally triggered and continuous readout with global triggering

  39. Neutrino Platform @ CERN SPS

  40. The ProtoDUNE project • Test and validate the technologies and design that will be applied to the construction of the DUNE Far Detector • Prototypes placed on a dedicated beam line at the CERN SPS accelerator complex (charged particles, 0.5 – 7 GeV/c), at the surface • The rate and volume of data produced by these detectors will be substantial (O(Tb/s)) • "ProtoDUNE" comprises single-phase and dual-phase LAr TPC detectors • Today the focus is on ProtoDUNE SP

  41. ProtoDUNE SP • Experiment to validate detector design and construction techniques for DUNE • Detector modules of final size • 4% of the detector elements • Aim to record ~6M beam events in 2018 • Cosmics in addition

  42. Inside ProtoDUNE SP

  43. Anode plane assembly module

  44. ProtoDUNE SP TDAQ environment • 6 Anode Plane Assemblies (APAs) • TPC ~430 Gb/s (continuous readout; 15360 channels @ 2 MHz) • Photon Detectors ~1 Gb/s (locally triggered) • SPS super cycle structure: 2 x 4.8 s bursts in 48 s • Full readout -> ~85 Gb/s averaged over the cycle • Too much for the DAQ as well as for storage and offline! • Introduction of a simple global trigger to mitigate the data flow • Retain full readout off the detector • Cannot rely on triggering on TPC signatures, because there is too much activity from cosmic rays • Lossless data compression to reduce the event size
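The numbers on the slide hang together arithmetically. Note that the sample width below is an assumption chosen to reproduce the quoted ~430 Gb/s; the slide does not state the ADC word size:

```python
# Cross-check of the slide's rates. bits_per_sample = 14 is an ASSUMPTION
# that reproduces the quoted figure; it is not stated on the slide.

channels = 15_360          # TPC channels across the 6 APAs
sample_rate = 2e6          # 2 MHz continuous sampling
bits_per_sample = 14       # assumed

tpc_rate = channels * sample_rate * bits_per_sample / 1e9   # Gb/s
print(f"TPC readout: {tpc_rate:.0f} Gb/s")                  # ~430 Gb/s

# SPS super cycle: 2 bursts of 4.8 s in every 48 s -> 20% duty cycle
duty_cycle = 2 * 4.8 / 48
print(f"cycle-averaged: {tpc_rate * duty_cycle:.0f} Gb/s")  # ~85 Gb/s
```

The cycle-averaged figure shows why buffering over the spill structure matters: the instantaneous rate is five times the average.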

  45. ProtoDUNE SP TDAQ [Diagram: APAs (~430 Gb/s) and Photon Detectors feed ADCs and Readout nodes; a local trigger and a global Trigger with external info steer the Event Builders; < 20 Gb/s to Storage] • Readout with large buffers to allow exploiting the spill structure • Trigger logic implemented in a custom board • Inputs from beam instrumentation, muon veto, photon detectors • Data compression to reduce storage and network needs • An event is a 5 ms window of all data contained in the readout corresponding to the timestamp of a trigger

  46. ProtoDUNE SP TDAQ implementation • On-detector electronics connected to the TDAQ only via ~700 fibers, ensuring that the detector ground remains isolated • Detector very sensitive to noise • 2 readout systems developed for other purposes deployed for the TPC (ATCA- or PCIe-based) • Validates the use of those hardware devices for the future • Network based on 10 Gb/s Ethernet • A farm of ~20 high-performance servers ensures the data flow, monitoring and control of the experiment • ½ PB of local temporary storage for data • TDAQ software based on already existing frameworks • artDAQ for data flow + JCOP for run control

  47. ProtoDUNE SP example summary • The ProtoDUNE SP TDAQ combines elements of continuous and locally triggered readout, as foreseen for DUNE • Combined with a global trigger to accommodate the test beam environment • This TDAQ system allows validating several design aspects of the final system • The very high activity due to cosmic-ray exposure does not allow selecting data based on TPC trigger primitives • Reuse of readout hardware and software frameworks for the TDAQ has allowed very fast system development • Data taking in one of the largest test beams ever is foreseen for 2018

  48. Conclusions • We have shown the basic components of a TDAQ system and their interplay • We have highlighted different modes of initiating the data acquisition and of defining an "event" • We have used ProtoDUNE SP as an example in which all the described concepts are used • TDAQ systems need to be tuned to the physics goals and characteristics of the experiments • The building blocks are nevertheless always the same • Common TDAQ frameworks and generic readout components make it possible to build TDAQ systems in shorter times and to focus the effort on the experiment instead of on TDAQ development
