
Physics - Stored information - Reconstructed physics

Trigger, DAQ, Computing. Focus on LHC collider experiments, but look at other experiments / accelerators as well: event selection, data acquisition, online (HLT) and offline computing. Look at previous, recent and future developments. Physics - stored information - reconstructed physics (online and offline).


Presentation Transcript


1. Trigger, DAQ, Computing • Focus on LHC collider experiments but look at other experiments / accelerators as well • Event selection • Data acquisition • Online (HLT) and offline computing • Look at previous - recent - future development

2.-3. Physics - Stored information - Reconstructed physics: online from physics to stored information, offline from stored information to reconstructed physics

4. Electronic / digital path from detector to physics. First part: Trigger + DAQ • Analog/digital electronics channels per detector element • Analog electronics: amplifiers, shapers • Cabled up or directly embedded with detector elements • Data buffering - analog or digital • Digitization • Part or all of the information used for event selection • Hardware triggers: first, fastest selection stage • Necessary for very fast selection, based on partial data • If rates too high for software event selection even on large processor farms (LHC; not so for ILC) • Software triggers: one or several slower stages of increasing complexity which need most/all of the data • Standard nowadays, fast compute hardware / parallelism prerequisite • Selection algorithms preferably the same as used offline • Data acquisition • Main data path, mostly in parallel to trigger path • Synthesizes the full picture from all channels: event building • Includes controls and monitoring • Data storage

5. High rates: Proton-Proton Collisions at the LHC [Figure: pp → H → ZZ → µ+µ− production sketch] Nominal LHC parameters: 7 TeV proton energy, 10^34 cm^-2 s^-1 luminosity, 2808 bunches per beam, 10^11 protons per bunch, 0.6 A beam current, 25 ns (7.5 m) bunch spacing. Rates: bunch crossings 4×10^7 Hz → proton-proton collisions 10^9 Hz → quark/gluon collisions → heavy particle production (W, Z, t, Higgs, SUSY, …) 10^+3 … 10^-7 Hz

6. Some details of the LHC accelerator / storage ring: every 10th RF bucket is filled with protons (plus some extra gaps along the circumference)

7. Event Rates and Multiplicities [Plot: p-p cross section / mb vs. cm energy / GeV]
σ_tot(14 TeV) ≈ 100 mb, σ_inel(14 TeV) ≈ 70 mb (cross sections of p-p collisions)
R = L × σ_inel = 10^34 cm^-2 s^-1 × 70 mb = 7·10^8 Hz
N = R × Δt = 7·10^8 s^-1 × 25·10^-9 s = 17.5; corrected for not all bunches being filled: 17.5 × 3564 / 2808 ≈ 23 interactions / bunch crossing (pileup)
where R = event rate, L = luminosity = 10^34 cm^-2 s^-1, σ_inel = inelastic cross section = 70 mb, N = interactions / bunch crossing, Δt = bunch crossing interval = 25 ns
n_ch ≈ 50 charged particles / interaction, N_ch = n_ch × 23 ≈ 1150 charged particles / bunch crossing, N_tot = N_ch × 1.5 ≈ 1725 particles / bunch crossing
Per bunch crossing: 23 minimum bias events, ~1725 particles produced
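The pile-up arithmetic on this slide can be reproduced directly from the quoted numbers; a minimal sketch (my own illustration, not from the original deck):

```python
# Worked version of the rate and pile-up arithmetic above.

luminosity = 1e34            # cm^-2 s^-1
sigma_inel = 70e-27          # 70 mb in cm^2
bunch_spacing_s = 25e-9      # 25 ns
filled, slots = 2808, 3564   # not all bunch slots are filled

rate = luminosity * sigma_inel          # R = L * sigma_inel
pileup_avg = rate * bunch_spacing_s     # interactions per 25 ns, averaged over all slots
pileup = pileup_avg * slots / filled    # per *filled* bunch crossing

n_ch_per_interaction = 50
print(rate)                                       # 7e8 Hz
print(pileup)                                     # ~22, which the slide rounds to 23
print(23 * n_ch_per_interaction)                  # ~1150 charged particles per crossing
print(23 * n_ch_per_interaction * 1.5)            # ~1725 particles in total per crossing
```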

8. Select signal in high background

9. Looking for Interesting Events: Higgs → ZZ → 2e + 2µ on top of 23 minimum bias events

10. ATLAS and Event Sizes
Inner Detector       Channels    Fragment size (kB)
  Pixels             1.4×10^8    60
  SCT                6.2×10^6    110
  TRT                3.7×10^5    307
Muon Spectrometer
  MDT                3.7×10^5    154
  CSC                6.7×10^4    256
  RPC                3.5×10^5    12
  TGC                4.4×10^5    6
Calorimetry
  LAr                1.8×10^5    576
  Tile               10^4        48
Trigger
  LVL1               -           28
ATLAS event size: 1.5 MBytes; 140 million channels organized into ~1600 Readout Links. Affordable mass storage bandwidth: 300 MB/s ☛ 3 PB/year for offline analysis
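The fragment sizes do add up to the quoted ~1.5 MB, and the storage figure follows from the ~200 Hz output rate of the later slides; a quick cross-check sketch (the ~10^7 s of data taking per year is my own standard assumption, not stated on the slide):

```python
# Cross-check of the ATLAS event size and storage numbers above.

fragments_kb = {
    "Pixels": 60, "SCT": 110, "TRT": 307,          # inner detector
    "MDT": 154, "CSC": 256, "RPC": 12, "TGC": 6,   # muon spectrometer
    "LAr": 576, "Tile": 48,                        # calorimetry
    "LVL1": 28,                                    # trigger
}
event_size_mb = sum(fragments_kb.values()) / 1e3
print(event_size_mb)                               # ~1.56 MB, i.e. the quoted ~1.5 MB

output_rate_hz = 200                               # events written to storage
seconds_per_year = 1e7                             # typical data-taking year (assumption)
print(output_rate_hz * event_size_mb)              # ~300 MB/s
print(output_rate_hz * event_size_mb * seconds_per_year / 1e9)   # ~3 PB/year
```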

11. ATLAS Three-Level Trigger Architecture (latencies: LVL1 2.5 µs, LVL2 ~10 ms, Event Filter ~ sec) • LVL1 decision made in hardware with calorimeter data with coarse granularity and muon trigger chambers data; buffering on detector • LVL2 uses Region of Interest data (ca. 2%) with full granularity and combines information from all detectors; performs fast rejection; buffering in ROBs • Event Filter refines the selection, can perform event reconstruction at full granularity using latest alignment and calibration data; buffering in EB & EF

12. Digitizers and digital data paths [Figure: flash ADC principle] • Digitizers • Analog to digital - ADCs • Time to digital - TDCs • Flash ADCs - FADCs • Buses • VME as example • More and more replaced by serial connections, even for memory • Processors • 68k family - still around • Transputers (ZEUS, OPAL) - history • Digital Signal Processors (DSPs) in front-end electronics boards, programmed in C • Intel family processors, towards multiple cores on one chip, now everywhere in DAQ systems, running Linux • CMOS insulator layer too thin for further reduction of structure size - no large increase of clock frequency
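To illustrate the flash ADC principle referenced above, here is a minimal sketch (my own illustration, not from the slides): the input voltage is compared in parallel against a ladder of 2^N - 1 reference levels, and the resulting thermometer code is encoded into an N-bit value in a single step, which is what makes flash ADCs fast.

```python
# Toy model of a flash ADC: one comparator per reference level, then a
# priority encoder that turns the thermometer code into a binary value.

def flash_adc(v_in, v_ref=1.0, n_bits=8):
    n_levels = 2 ** n_bits - 1                                      # 255 comparators for 8 bits
    thresholds = [v_ref * (i + 1) / (n_levels + 1) for i in range(n_levels)]
    thermometer = [v_in > t for t in thresholds]                    # parallel comparison
    return sum(thermometer)                                         # encoded output code

print(flash_adc(0.5))   # ~128 for an 8-bit ADC with 1 V full scale
```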

13. VME bus: history (not only) [Photo: VME crate with e.g. an MC68040 processor board and an ADC (AD7821)] • Asynchronous bus, originally developed for Motorola 68k microprocessors • Main data path in LEP experiments (together with Fastbus, VSB, Ethernet; even CAMAC) • Still widespread use in experiments • As controls bus (configure electronics, ...) • But also for transfer of event data in slow fashion (ATLAS "ROD crate DAQ") • n bus masters (active requestors), m bus slaves; 1 bus - 1 controller: shared resource • Address and data (8...64 bit), handshake signals • Distribution of real-time signals, e.g. trigger • Arbitration among bus masters
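The "shared resource" point can be made concrete with a toy arbitration model (my own sketch, not a description of the actual VME arbiter): several masters request the single bus and an arbiter grants it to one of them per cycle.

```python
# Toy fixed-priority bus arbitration among n masters sharing one bus.

def arbitrate(requests, priority):
    """Grant the bus to the highest-priority requesting master, or None."""
    for master in priority:          # walk masters in priority order
        if master in requests:
            return master
    return None

# Masters 2 and 0 request the bus in the same cycle; master 0 wins.
print(arbitrate(requests={2, 0}, priority=[0, 1, 2, 3]))   # -> 0
```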

14. Trigger levels at LHC

15. Trigger + Data Acquisition (TDAQ) - more detail [Diagram: ATLAS TDAQ dataflow]
Detectors (Calo, muon trigger chambers, other detectors) at 40 MHz → frontend pipelines → LVL1 (latency 2.5 µs, accept rate 75 kHz) → Read-Out Drivers (ROD) → ROD-ROB connection (RRC, 120 GB/s) and ROD-ROS merger (RRM) → Read-Out Buffers (ROB) in the Read-Out Sub-systems (ROS).
RoI Builder (ROIB) and L2 Supervisor (L2SV) steer the L2 Processing Units (L2PU) over the L2 network; RoI data = 2% of the event, LVL2 latency ~10 ms, LVL2 accept = 2.5 kHz.
Event building (Dataflow Manager DFM, Event Building network EBN, Sub-Farm Inputs SFI) at 3+3 GB/s → Event Filter processors (EFP, ~ sec per event, 3 GB/s in) → EF accept = ~0.1 kHz (~200 Hz) → Sub-Farm Outputs (SFO) → Tier0 at ~300 MB/s
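The bandwidths quoted on this slide follow from the accept rates and the ~1.5 MB event size of slide 10; a back-of-the-envelope check (my own sketch):

```python
# Cross-check of the TDAQ dataflow numbers, assuming a 1.5 MB event.

event_size_mb = 1.5          # MB per event (slide 10)
lvl1_rate = 75e3             # Hz, LVL1 accept
lvl2_rate = 2.5e3            # Hz, LVL2 accept
ef_rate   = 200.0            # Hz, Event Filter accept
roi_fraction = 0.02          # LVL2 reads only ~2% of each event (RoI data)

print(lvl1_rate * event_size_mb / 1e3)                 # ~112 GB/s into the ROSs (slide: 120 GB/s)
print(lvl1_rate * event_size_mb * roi_fraction / 1e3)  # ~2.3 GB/s of RoI data pulled by LVL2
print(lvl2_rate * event_size_mb / 1e3)                 # ~3.8 GB/s event building (slide: 3+3 GB/s)
print(ef_rate * event_size_mb)                         # ~300 MB/s to mass storage
```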

16. ATLAS Trigger Level 1 (LVL1) - Muons and Calorimetry. Muon trigger: looking for coincidences in the muon trigger chambers, 2 out of 3 stations (low-pT; >6 GeV) and 3 out of 3 (high-pT; >20 GeV); trigger efficiency 99% (low-pT) and 98% (high-pT). Calorimetry trigger: looking for e/γ/τ + jets • Various combinations of cluster sums and isolation criteria • ET^em, ET^had, ET^miss
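The muon coincidence logic described above is simple majority logic; a small sketch of the idea (my own illustration, ignoring the geometric road widths the real trigger also applies):

```python
# 2-out-of-3 / 3-out-of-3 coincidence logic of the LVL1 muon trigger.

def muon_lvl1(station_hits):
    """station_hits: list of 3 booleans, one per trigger-chamber station."""
    n = sum(station_hits)
    if n == 3:
        return "high-pT candidate (>20 GeV threshold)"
    if n >= 2:
        return "low-pT candidate (>6 GeV threshold)"
    return "no trigger"

print(muon_lvl1([True, True, False]))   # low-pT candidate
print(muon_lvl1([True, True, True]))    # high-pT candidate
```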

17. Region of Interest (RoI) Mechanism [Event display: H → 2e + 2µ, with the 2 µ and 2 e RoIs marked] • LVL1 triggers on high-pT objects: calorimeter cells and muon chambers to find e/γ/τ-jet/µ candidates above thresholds • LVL2 uses Regions of Interest as identified by Level-1: local data reconstruction, analysis, and sub-detector matching of RoI data • The total amount of RoI data is minimal: ~2% of the Level-1 throughput, but it has to be extracted from the rest at 75 kHz • EF can use full event data and also the results obtained by LVL2

18. ATLAS High Level Trigger selection and configuration. HLT strategy: refinement of TriggerElements (seeded from LVL1) in stepwise processing; perform stepwise decisions. HLT uses the offline reconstruction SW framework ATHENA on ~2000 processor boxes. [Figure: example of step-wise processing] Parts to be configured: 1) HLT Menu: determines which algorithms are called at which step and which signatures need to be fulfilled for accepted events 2) all configuration parameters of the algorithms, called JobOptions (JO); compatibility with offline important 3) release information → consistency (with LVL1) important
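A minimal sketch of the step-wise idea described above (my own illustration, not ATLAS code): a menu lists, per step, which algorithm refines the trigger elements and which signature must be fulfilled, and events failing a step are rejected immediately so the more expensive later steps run rarely.

```python
# Step-wise HLT processing with early rejection, driven by a menu.

def run_hlt(event, menu):
    elements = event["lvl1_trigger_elements"]            # seeded from LVL1
    for step in menu:
        elements = step["algorithm"](event, elements)    # refine trigger elements
        if not step["signature"](elements):              # step-wise decision
            return False                                 # early rejection
    return True                                          # event accepted

# Hypothetical two-step electron chain: cluster confirmation, then track match.
menu = [
    {"algorithm": lambda ev, els: [e for e in els if e["Et"] > 22.0],
     "signature": lambda els: len(els) >= 1},
    {"algorithm": lambda ev, els: [e for e in els if e.get("has_track")],
     "signature": lambda els: len(els) >= 1},
]

event = {"lvl1_trigger_elements": [{"Et": 25.0, "has_track": True}]}
print(run_hlt(event, menu))   # True
```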

19. LVL1 Trigger Thresholds + Rates. LVL1 rate is dominated by electromagnetic clusters: 78% of physics triggers

20. Inclusive Higher Level Trigger (HLT) Event Selection. At the HLT the e/γ contribution to the rate is much reduced: 45% of physics triggers

21. More examples of Trigger + DAQ systems. LHC: • Variety of different architectures • Partly because of different requirements (cf. ALICE vs. LHCb) • Partly because of different emphasis and design goals (cf. ATLAS vs. CMS) • Wherever possible commercial solutions are used, not necessarily always commodity. Further future: • CBM (Compressed Baryonic Matter), heavy-ion experiment at GSI (~2014) • ILC (International Linear Collider), timescale ~2017 • CLIC (Compact Linear Collider), timescale ~202x (or so)

22. The four LHC experiments

23. ALICE - Heavy Ions
Pb-Pb beam (bunch crossing rate 10 MHz):
  Trigger        L2 rate    Max. event size
  Central        20 Hz      86.0 MB
  Minimum bias   20 Hz      20.0 MB
  Dimuon         1600 Hz    0.5 MB
  Dielectron     200 Hz     9.0 MB
pp beam (bunch crossing rate 40 MHz):
  Minimum bias   100 Hz     2.5 MB
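Multiplying the quoted L2 rates by the maximum event sizes gives the aggregate Pb-Pb data rate these numbers imply; a quick cross-check sketch (my own illustration):

```python
# Aggregate ALICE Pb-Pb data rate implied by the table above.

pb_pb = {                       # trigger class: (L2 rate in Hz, max event size in MB)
    "central":    (20,   86.0),
    "min_bias":   (20,   20.0),
    "dimuon":     (1600,  0.5),
    "dielectron": (200,   9.0),
}
total_mb_per_s = sum(rate * size for rate, size in pb_pb.values())
print(total_mb_per_s / 1e3)     # ~4.7 GB/s upper limit into the DAQ (before any compression)
```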

24. ALICE: logical model of online system

25. ALICE: architecture [Diagram: ALICE trigger and DAQ architecture] Central Trigger Processor (CTP) distributes L0, L1a, L2 (rare/all) via Local Trigger Units (LTU) and TTC to the detector front-end electronics (FERO), with BUSY back-pressure; event fragments flow over Detector Data Links (DDL) into D-RORC receiver cards hosted by Local Data Concentrators (LDC), and via H-RORCs into the HLT farm; sub-events are built over the Event Building Network (with load balancing, EDM) in Global Data Collectors (GDC) and sent over the Storage Network to Transient Data Storage (TDS) and Permanent Data Storage (PDS). Counts quoted on the diagram: 10 DDLs / 10 D-RORC / 10 HLT LDC, 163 DDLs, 343 DDLs, 425 D-RORC, 200 detector LDC, 50 GDC, 25 TDS, 5 DSS

26. ATLAS vs. CMS • Similar figures yet different architectural choices • ATLAS (already shown): minimize bandwidth • CMS: maximize flexibility

27. CMS: Trigger+DAQ architecture • Feature: solve 1/8th of the problem and replicate the solution (DAQ Slice) • 8 parallel readout systems - with independent Triggers • Highly scalable • Full readout at HW trigger rate

28. CMS: Trigger+DAQ architecture - other view • All data read out at hardware trigger rate • Full flexibility in application of software trigger • Profit from increasing CPU power with time for more sophisticated trigger algorithms • No latency limitations for trigger algorithms (at ATLAS, limited for LVL2)

29. ATLAS Trigger/DAQ dataflow at Point 1 [Diagram: dataflow across UX15, USA15 and the SDX1 surface building]
UX15: ATLAS detector, first-level trigger, Timing Trigger Control (TTC); event data pushed @ ≤ 100 kHz, 1600 fragments of ~1 kByte each.
USA15: ~150 PCs, Read-Out Drivers (RODs), 1600 Read-Out Links (plus VME and dedicated links), Read-Out Subsystems (ROSs), RoI Builder; data of events accepted by the first-level trigger.
SDX1 (dual-CPU nodes): second-level trigger (LVL2 farm, ~500) with LVL2 Supervisor, pROS (stores LVL2 output) and DataFlow Manager; Event Builder with SubFarm Inputs (SFIs, ~100); Event Filter (EF, ~1600); SubFarm Outputs (SFOs, ~30) to local storage and the CERN computer centre.
Gigabit Ethernet network switches carry Regions Of Interest, event data requests and delete commands down and the requested event data up; event data pulled: partial events @ ≤ 100 kHz, full events @ ~3 kHz; event rate to data storage ~200 Hz

30. LHCb - CP violation in B system

31. LHCb: DAQ characteristics • Single-level hardware trigger • Full readout of the entire detector at 1 MHz • Full flexibility for trigger algorithms within limits of available CPU power • Take full advantage of new CPU architectures (multi-core) since there is no latency restriction • GbEthernet as uniform link technology from front-end electronics to storage • Cheap, especially if copper links can be used • Scalable, large range of available speeds (100, 1000, 10000 Mb/s) • Large switches available (>1200 GbEthernet ports) • Scalable to the size of one switch • Extensible, applying the CMS methodology of decomposing the DAQ system into two or more 'disjoint' subsystems
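A rough sizing sketch (my own illustration): with a single-level hardware trigger reading out the full detector at 1 MHz, the aggregate bandwidth and the number of Gigabit Ethernet links follow directly. The ~35 kB average event size and the 100 MB/s usable payload per link are assumptions for illustration, not numbers quoted on this slide.

```python
# Rough bandwidth and link-count estimate for full readout at 1 MHz.

readout_rate_hz = 1e6        # full readout rate after the hardware trigger
event_size_kb = 35.0         # assumed average event size (illustrative)
gbe_payload_mb_s = 100.0     # assumed usable payload per 1 Gb/s link

total_mb_s = readout_rate_hz * event_size_kb / 1e3
print(total_mb_s / 1e3, "GB/s aggregate")                          # ~35 GB/s
print(int(total_mb_s / gbe_payload_mb_s), "GbE links (no margin)") # ~350 links
```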

32. CBM - Compressed Baryonic Matter • Heavy-ion experiment planned at the future FAIR facility at GSI (Darmstadt) • Timescale: ~2014

33. CBM: event characteristics • Hardware triggering problematic • Complex reconstruction • 'Continuous' beam • Trigger-free readout • Self-triggered channels with precise time-stamps • Correlation and association later in CPU farm
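The trigger-free idea above can be illustrated with a small sketch (my own illustration, not CBM code): every channel pushes self-triggered hits with precise time-stamps, and the CPU farm later associates hits into event candidates by their time correlation. The channel names and the 50 ns gap are hypothetical.

```python
# Time-based association of self-triggered, time-stamped hits into events.

def build_events(hits, gap_ns=50.0):
    """Group (timestamp_ns, channel) hits: a new event candidate starts
    whenever the gap to the previous hit exceeds gap_ns."""
    events, current = [], []
    for t, ch in sorted(hits):
        if current and t - current[-1][0] > gap_ns:
            events.append(current)
            current = []
        current.append((t, ch))
    if current:
        events.append(current)
    return events

hits = [(100.0, "STS-1"), (102.5, "TOF-7"), (103.1, "RICH-3"), (980.0, "STS-2")]
print(len(build_events(hits)))   # 2 event candidates
```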

34. CBM: DAQ architecture

35. CBM: DAQ characteristics. Very much network based → 5 different networks • Very low-jitter (10 ps) timing distribution network • Data collection network to link detector elements with front-end electronics (link speed O(GB/s)) • High-performance (~O(TB/s)) event building switching network connecting O(1000) data collectors to O(100) subfarms • Processing network within a subfarm interconnecting O(100) processing elements for triggering/data compression • Output network for collecting data ready for archiving after selection

36. ILC - International Linear Collider [Timing diagram: 2820-bunch trains of ~1 ms length, separated by 199 ms, repeating at 5 Hz] • Superconducting e+e- collider at cm energy ≤ 1 TeV • Timescale: ~2017 • Machine characteristics: bunch train of length ~1 ms, bunch spacing ~330 ns, train repetition rate ~5 Hz, average bunch crossing rate ~15 kHz • Bunch spacing would allow a hardware trigger, but probably not worth the additional complication in front-end electronics → (HW-)trigger-free readout → software-only trigger
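The bunch-structure numbers above are consistent with each other; a quick arithmetic check (my own sketch):

```python
# Cross-check of the ILC bunch-structure figures.

bunches_per_train = 2820
train_rate_hz = 5.0            # train repetition rate
bunch_spacing_ns = 330.0

train_length_ms = bunches_per_train * bunch_spacing_ns * 1e-6
print(train_length_ms)                          # ~0.93 ms, i.e. the quoted ~1 ms train length
print(bunches_per_train * train_rate_hz / 1e3)  # ~14 kHz average bunch crossing rate (~15 kHz on the slide)
print(1e9 / bunch_spacing_ns / 1e6)             # ~3 MHz instantaneous crossing rate within a train
```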

37. ILC: conceptual ideas for DAQ

38. ILC: architectural ideas

39. CLIC - Compact Linear Collider • e+e- collider at cm energy ~3-5 TeV • Novel accelerating technique (drive beam) • R&D programme, proof of concept by 2009 • Timescale: 202x • No detector design really started → guessing • Bunch trains (~100 Hz), 156 bunches, bunch spacing 0.67 ns, train length ~100 ns • Bunch spacing surely poses a challenge on the detector technology and electronics ("1 storage scope per channel") • Bunch spacing such that always many bunch crossings are overlaid → no hardware triggering possible at bunch-by-bunch level → trigger-free readout to software trigger
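The CLIC train numbers also check out arithmetically; a short sketch (my own illustration):

```python
# Cross-check of the CLIC bunch-train figures.

bunches_per_train = 156
bunch_spacing_ns = 0.67
train_rate_hz = 100.0

train_length_ns = bunches_per_train * bunch_spacing_ns
print(train_length_ns)                            # ~105 ns, i.e. the quoted ~100 ns train length
print(bunches_per_train * train_rate_hz / 1e3)    # ~16 kHz average bunch crossing rate
print(train_length_ns * 1e-9 * train_rate_hz)     # ~1e-5 duty factor: beam is "on" ~0.001% of the time
```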

40. Trigger + DAQ summary • Near-future (LHC) DAQ systems: variety of architectures, depending on design goals and emphasis; electronics and link capacities (not yet) capable of operating 'trigger-free' • Far-future DAQ systems will strive for eliminating hardware triggers: readout at bunch-crossing or train-crossing rate, send data to CPU farms for event selection, for more flexible triggering and optimal efficiency. Some extrapolation of present experience... • Parallel buses (inter- and intra-board) will fade away for DAQ: cannot compete at high speed with serial technologies (skew); examples: disappearance of PCI and replacement by PCI-Express, move from parallel to serial interfaces to memories • Multi-core CPUs rather than faster and faster single cores, ideal for 'long-latency' applications → trigger-free readout systems • CMOS speedup hits limits (6 atomic layers of insulator...); power consumption no longer ~ area but ~ # transistors
