This presentation discusses the system-wide triggering and data acquisition (DAQ) requirements for the CALICE test beam. Topics include trigger latency, jitter, trigger types, trigger control, trigger distribution, beam monitoring and the DAQ software.
System-wide triggering and DAQ issues
Paul Dauncey, Imperial College London, UK
System requirements
• Need high statistics for accurate simulation comparison
  • Multiple set-ups (energy, particle type, HCAL, angle, etc.) ~ 10²
  • Need high statistics per set-up; accuracy to 3σ needs ~ 10⁶ events
• Need clean data for accurate simulation comparison
  • Remove double-particle events and cosmics in time with the beam
  • Minimum trigger bias
• Need to take data in a reasonable time
  • For 10⁸ events total, need around ~ 100 Hz average
  • 10⁶ seconds is around two weeks of continuous running time
  • Several months of realistic beam time

Look here at the non-calorimeter elements of the system:
• Timing, trigger handling and distribution
• Beam monitoring and slow controls
• DAQ software
Trigger: requirements
• Latency of overall trigger path < 180 ns
  • This is from the peak of the shaping time in the VFE chip for the sample-and-hold
  • HCALs may set a stricter requirement; not yet defined
  • Implies all electronics must be within the radiation area
• Jitter on trigger < 10 ns
  • This is again from the VFE shaping; gives the peak within 1% (see the worked check below)
  • What is the spread of shaping times between VFE chips? If > 10 ns within a VFE PCB, this gives a direct contribution to the resolution
  • Again, HCAL may have a stricter jitter requirement; as yet unknown (but see later)
• Several trigger types must be selectable
  • Beam, beam veto, cosmic, software, external clock; others?
  • Allow different trigger types inside and outside a spill; needed?
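As a rough cross-check of the 1% figure (a sketch assuming an idealised CR-RC shaper; the actual VFE pulse shape is not specified here), the normalised pulse and its behaviour near the peak at t = τ are

$$ V(t) = \frac{t}{\tau}\, e^{1 - t/\tau}, \qquad \frac{V(\tau+\Delta t)}{V(\tau)} \approx 1 - \frac{1}{2}\left(\frac{\Delta t}{\tau}\right)^{2} $$

so for a shaping time of order τ ≈ 150-180 ns, a trigger jitter of Δt = 10 ns costs only about (1/2)(10/160)² ≈ 0.2% of the peak amplitude, comfortably within the 1% quoted above.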
Trigger: overview
• Use the trigger as a system-wide synchronisation marker
  • All timing done relative to trigger arrival time; each system can clock itself independently
  • Removes the need for a Fast Control and Timing System (no one signed up for this anyway)
• Trigger sent after the event, not before
  • No assumptions about beam line signals being available
  • Cosmics can be done in an identical way
  • Requires removal of double-particle events (with the second particle after the trigger) offline
  • Double-particle events with the second particle before the trigger can be vetoed in the trigger logic
• Trigger on an external detector (e.g. scintillators), not the calorimeter
  • Removes the need for trigger electronics at the VFEs
  • Minimum bias; easy to simulate
Trigger: control requirements
• Need to hold off further triggers until the previous event is finished (see the busy-logic sketch below)
  • Sample-and-hold at the VFE cannot be released until the data are digitised
  • DHCAL has no such requirement; could handle multiple events
  • Does not necessarily require data to be read out through VME; ECAL boards can buffer up to 2k events
• Need to enable configuration of different trigger types
  • PC controlled; must not require recabling
• Need to read out some event data for the trigger itself
  • Record the type of trigger which caused the event
  • Offline second-particle detection and removal
• Must be only one control interface to the PC in the whole system
  • No way to synchronise different PCs to the accuracy required
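A minimal C++ sketch of the hold-off ("busy") logic described above (illustrative only; in the real system this lives in firmware on the readout boards, and all names here are hypothetical):

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch: a trigger latches "busy" until the event has been
// digitised; triggers arriving while busy are counted but not accepted.
class TriggerGate {
public:
  // Called on each candidate trigger; returns true if accepted.
  bool onTrigger() {
    bool expected = false;
    if (busy_.compare_exchange_strong(expected, true)) {
      ++accepted_;
      return true;   // sample-and-hold now frozen, await digitisation
    }
    ++vetoed_;       // previous event still being digitised; hold off
    return false;
  }

  // Called when the VFE data for the current event have been digitised.
  void onDigitisationDone() { busy_.store(false); }

  uint64_t accepted() const { return accepted_.load(); }
  uint64_t vetoed() const { return vetoed_.load(); }

private:
  std::atomic<bool> busy_{false};
  std::atomic<uint64_t> accepted_{0};
  std::atomic<uint64_t> vetoed_{0};
};
```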
Trigger: distribution path
• Assume someone provides trigger scintillators with logic in NIM
  • E.g. DESY provides two crossed scintillators, 1×1 cm², with PMTs, HV, discriminators and NIM logic in the beam line
• Trigger routed through one of the ECAL readout boards
  • Provides trigger replication, distribution and timing adjust for the HCAL and beam monitoring
  • Provides the trigger VME interface for control

[Diagram: Trg Logic (NIM) → NIM-to-LVDS → ECAL trg board → other ECAL boards and beam monitor (LVDS?), and → LVDS-to-NIM → HCAL (NIM?)]
Trigger: control implementation
• VME control logic embedded in the FPGA on the ECAL readout board
• Identical firmware in all FPGAs; only activated on one board
Trigger: HCAL SiPMs?
• AHCAL extremely likely to use silicon PMs (SiPMs) for readout
  • Possibly in parallel with APD readout
• Major complication for the trigger
  • Intrinsic signal shape around 30 ns; too short for "after event" trigger distribution
  • Noise rate ~ 2 MHz at the lowest energies
  • Shaping the signal to peak at ~ 150 ns may introduce too much noise to distinguish individual channel peaks; needs study
• May have to have a "pre-trigger" signal
  • Hope the real trigger arrives within ~ 10 ns of the signal peak
  • Straightforward if beam line signals are available to make the pre-trigger
  • Otherwise, potentially, efficiency of only ~ 5%
  • Cosmics are still a problem
• One other possibility is "self-trigger"
  • Issues of bias and trigger distribution from the HCAL VFEs
Beam monitoring and slow controls
• Need some tracking within the beam line
  • Resolution should be much better than the pad size of ~ 1 cm
  • May be provided in the beam line, e.g. DESY has an old ZEUS silicon strip detector telescope (but no one has succeeded in reading it out recently)
  • If not, we need to supply one (but no one has signed up to do this)
  • Tracking must be read out per event; technology/format unknown
• If a mixed particle beam, need particle ID also
  • Cherenkov, TOF; also provided?
• Slow controls
  • "Control" less important; monitoring is what is needed
  • Read out supply voltages and translation stages; also temperature, pressure?
  • Low rate of readout needed, ~ 1 Hz?; independent of the event readout
  • Again, readout technology/format unknown (to us); is anything defined for this yet?
DAQ: requirements
• Event rate of ~ 1 kHz during the spill, ~ 100 Hz average
  • DHCAL may require the rate limited to ~ 300 Hz during the spill
• Event sizes of up to 40 kBytes
  • Read all data without zero suppression (except DHCAL)
  • Implies 40 MBytes/s peak readout without buffering; this exceeds the maximum rate within a single VME crate
• Read out ECAL, (A/D)HCAL, trigger, beam line monitoring
  • (Potentially) separate crates, (potentially) different technologies
• Flexible configuration to work in several beam lines
  • Minimise dependence on external networking, etc.
  • Also must be able to run ECAL and HCAL separately during initial tests
• Need to take many different types of runs
  • Beam data, beam interspersed with pedestals, calibration, cosmics, etc.
DAQ: concept
• Many unknowns; keep flexible
  • Plug-and-play components to be bolted together later as required
• Simple and robust data structure (see the sketch below)
  • Keep all information in one place; the run file is self-contained
  • All configuration data used are stored within the file
  • All slow controls readout is stored within the file
  • Eases merging with simulation and analysis formats
• Allow arbitrarily complex run structure
  • Number and type of configurations completely flexible within a run
  • Triggers within and outside of spills can be different and can be identified offline
• Implementation
  • POSIX-compliant (mostly!) C++ running on Linux PCs
  • VME using the SBS 620 VME-PCI interface; VME software based on HAL
  • ROOT for graphics and (possibly) eventual persistent data storage
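A minimal sketch of what a self-contained run file along these lines could look like: a flat stream of length-prefixed records, each tagged with a type id, so configuration, slow controls and event data all live in the same file. The names and exact header layout here are illustrative assumptions, not the actual CALICE format:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Illustrative record types; the real system distinguishes StartRun,
// EndRun, Event, SlowControls, etc.
enum class RecordType : uint32_t {
  StartRun = 1, EndRun = 2, Event = 3, SlowControls = 4, Configuration = 5
};

// Fixed-size header prefixing every record, so the file can be scanned
// without knowing the payload contents.
struct RecordHeader {
  uint32_t type;     // RecordType tag
  uint32_t length;   // payload size in bytes
};

// Append one record (header + raw payload) to the run file.
void writeRecord(std::ofstream& file, RecordType type,
                 const std::vector<uint8_t>& payload) {
  RecordHeader hdr{static_cast<uint32_t>(type),
                   static_cast<uint32_t>(payload.size())};
  file.write(reinterpret_cast<const char*>(&hdr), sizeof(hdr));
  if (!payload.empty())
    file.write(reinterpret_cast<const char*>(payload.data()), payload.size());
}
```

Note that writing the header as raw bytes makes this format endian-specific, as flagged on the output-format slide later.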
DAQ: overview
• Multi-PC system driven by a common run control PC
  • Each PC is independent; can have separate technology (VME, PCI, CAMAC, etc.)
  • PC configuration can be changed easily; single VME crate readout for separate system tests is possible
  • Multiple tasks could be run on one PC; e.g. run control, ECAL and event build
• Prefer PCs outside the radiation area if possible
  • Have own hub and network (cost?) or rely on the network infrastructure at the beam line?
DAQ: data structure
• Need to store C++ objects in a type-safe but flexible way
  • "Record" (generalised event; includes StartRun, EndRun, Event, SlowControls, etc.) and "subrecords" (for ECAL, HCAL, etc.)
  • Simple data array with an identity for run-time type-checking
• Type-checking through a simple id-to-class list
  • Prevents misinterpretation of a subrecord
• Record and subrecord handling completely blind to contents
  • Arbitrary payload class (but it cannot have virtual methods); see the sketch below
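A minimal sketch of the id-to-class idea (the ids, class names and layout are illustrative assumptions; the actual CALICE classes use a templated payload, as noted on the status slide):

```cpp
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <type_traits>
#include <vector>

// Hypothetical subrecord ids; the real id-to-class list is defined by
// the DAQ, these values are illustrative only.
enum class SubRecordId : uint32_t { Ecal = 1, Hcal = 2, Trigger = 3 };

// Each payload class declares the single id it is allowed to carry.
// Payloads are plain data with no virtual methods, so they can be
// copied to and from a raw byte array safely.
struct EcalPayload {
  static constexpr SubRecordId id = SubRecordId::Ecal;
  uint16_t adc[1024];
};

// A subrecord is an id plus an opaque byte array; the handling code
// never interprets the contents.
class SubRecord {
public:
  template <typename P>
  explicit SubRecord(const P& p) : id_(P::id), data_(sizeof(P)) {
    static_assert(!std::is_polymorphic<P>::value,
                  "payloads must not have virtual methods");
    std::memcpy(data_.data(), &p, sizeof(P));
  }

  // Run-time type check: refuse to unload into the wrong class.
  template <typename P>
  P as() const {
    if (id_ != P::id) throw std::runtime_error("subrecord id mismatch");
    P p;
    std::memcpy(&p, data_.data(), sizeof(P));
    return p;
  }

private:
  SubRecordId id_;
  std::vector<uint8_t> data_;
};
```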
DAQ: state machine
• All parts of the DAQ driven round a finite state machine (see the sketch below)
• Nested layers within a run allow arbitrary numbers of configurations
  • E.g. allows beam data, pedestals, beam data, pedestals…
  • E.g. allows calibration at DAC setting 0, setting 1, setting 2…
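A sketch of the nested run structure: a run wraps an arbitrary sequence of configurations, each wrapping a block of events. The state names are illustrative, not the actual CALICE state diagram:

```cpp
#include <cstdio>

enum class State { Idle, InRun, InConfiguration, InEvent };

void report(State s, const char* what) {
  std::printf("[state %d] %s\n", static_cast<int>(s), what);
}

// Drive one run through N configurations, e.g. alternating beam data
// and pedestals, or stepping through calibration DAC settings.
void takeRun(int nConfigurations, int eventsPerConfiguration) {
  report(State::InRun, "start run (StartRun record)");
  for (int c = 0; c < nConfigurations; ++c) {
    report(State::InConfiguration,
           "configure (e.g. pedestals vs beam, or next DAC setting)");
    for (int e = 0; e < eventsPerConfiguration; ++e) {
      report(State::InEvent, "trigger, read out and store one event");
    }
  }
  report(State::Idle, "end run (EndRun record)");
}
```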
DAQ: data transfer
• Record movement via a standardised interface (DIO)
  • Within a PC: each interface driven by a separate thread; copy the pointer only
  • PC-to-PC: via a socket (with the same interface); copy the actual data
• Standardised interface allows the configuration of data handlers to be easily changed
  • Flexibility to optimise to whatever configuration is needed, e.g. ECAL only
• Several building blocks needed (and exist); see the sketch below
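A hypothetical rendering of such a standardised interface (the method names are assumptions; the point is that every building block, whether buffer, copier, merger, demerger or socket endpoint, exposes the same calls, so blocks can be rewired without the rest of the system noticing):

```cpp
#include <memory>

class Record;  // opaque record, as sketched earlier

class DataInterface {
public:
  virtual ~DataInterface() = default;

  // Hand over a record. Within one PC only the shared pointer is
  // copied; a socket implementation instead serialises the payload
  // and copies the actual data to the peer.
  virtual bool put(std::shared_ptr<Record> record) = 0;

  // Fetch the next record, or nullptr if none is pending.
  virtual std::shared_ptr<Record> get() = 0;
};
```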
DAQ: topology
• For tests, assume the worst case: each subsystem (ECAL, HCAL, beam monitoring and slow controls) read out with a separate PC
  • Require one socket-socket branch for each
  • Each branch can read out a separate technology (VME, PCI, etc.)
• The monitor does not necessarily sample all events; its buffer allows events through only when spare CPU is available (see the sketch below)
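A minimal sketch of a monitoring buffer that never back-pressures the main data path: if the monitor has no spare CPU (its queue is full), new events are simply dropped from the monitoring stream rather than held up. Names are illustrative:

```cpp
#include <cstddef>
#include <deque>
#include <memory>
#include <mutex>

class Record;

class SamplingBuffer {
public:
  explicit SamplingBuffer(std::size_t maxDepth) : maxDepth_(maxDepth) {}

  // Called from the main data path; never blocks on the monitor.
  void offer(std::shared_ptr<Record> rec) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (queue_.size() < maxDepth_) queue_.push_back(std::move(rec));
    // else: monitor busy, drop this event from the monitoring stream
  }

  // Called from the monitor when it has spare CPU.
  std::shared_ptr<Record> take() {
    std::lock_guard<std::mutex> lock(mutex_);
    if (queue_.empty()) return nullptr;
    auto rec = std::move(queue_.front());
    queue_.pop_front();
    return rec;
  }

private:
  const std::size_t maxDepth_;
  std::deque<std::shared_ptr<Record>> queue_;
  std::mutex mutex_;
};
```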
DAQ: status
• First version of the data structure software exists
  • Records and subrecords; loading/unloading, etc.
  • Arbitrary payload (templated) for subrecords
• First version of the data transport software exists
  • Buffers, copiers, mergers, demergers, etc.
  • Arbitrary payload (templated) with specialisation for records
• First version of the run control software exists
  • Both automatic (pre-defined) and manual run structures
• VME hardware access working
  • SBS 620 VME-PCI interface board installed in a borrowed VME crate
  • Using the Hardware Access Library (CERN/CMS)
• These work together
  • Sustained rates achieved depend critically on the PCs, the network between the PCs on the different branches, compiler optimisation, inlining, etc.; a lot of tuning is needed
DAQ: major items still to be done
• Write data and configuration classes
  • Until the VME board interfaces are defined, cannot finalise the data format for event data or for board configuration data
• Output data format
  • Currently have ASCII and binary (endian-specific) output formats
  • The obvious choice would be ROOT: actual objects stored, can be used interactively, easy graphics, machine-independent, etc.
  • However, the HCAL people want to use LCIO instead; feasibility/limitations under investigation
  • Will need to convert whatever raw data format is used to a zero-suppressed analysis data format offline in a "reconstruction" step
• Online monitoring
  • Done via the ROOT memory map facility (TMapFile); allows interactive real-time histogramming (see the sketch below)
  • Need to write all the code to actually define and fill the histograms
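A minimal sketch of TMapFile-based monitoring on the producer (DAQ) side, following the pattern of the standard ROOT memory-map examples; the file name, histogram and update cadence are illustrative:

```cpp
#include "TMapFile.h"
#include "TH1F.h"

void monitorProducer() {
  // Memory-mapped file, shared with any viewer process on the same machine
  TMapFile* mfile = TMapFile::Create("daqmon.map", "RECREATE", 1000000,
                                     "Online monitoring histograms");
  TH1F* hadc = new TH1F("hadc", "ECAL ADC spectrum", 512, 0., 4096.);
  mfile->Add(hadc, "hadc");

  for (int event = 0; event < 100000; ++event) {
    hadc->Fill(/* ADC value from the event, e.g. */ 100.);
    if (event % 100 == 0) mfile->Update();  // publish to the map file
  }
}
```

A separate interactive ROOT session on the same machine can then attach with TMapFile::Create("daqmon.map") and fetch "hadc" via Get() to display it in real time.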
DAQ alternatives: MIDAS? XDAQ?
• MIDAS (PSI)
  • No experience of using this in the UK
  • Written for ~ MByte data rates, ~ 100 Hz event rates, single-PC systems
  • Limited state diagram; no ability to take different types of events in a run
  • A lot of baggage (databases, slow controls); more complex than required
  • C, not C++, so a less natural interface downstream (and not type-safe)
• XDAQ (CERN/CMS)
  • Significant experience of this at Imperial; useful to have experts on hand
  • Optimised for CMS (no beam spill structure; asynchronous trigger and readout) but easily deals with CALICE event rates and data sizes
  • Includes HAL automatically, so (should be) simple to retrofit later
  • Deserves further investigation
• If moving to an existing system, XDAQ seems more suitable (?)
  • Beware of the "3am crash" issue; it is hard to debug code written by other people in a hurry…
Summary
• Trigger
  • Several uncertainties remain, particularly with the HCAL SiPMs
  • Central control and distribution within the ECAL
• Beam monitoring and slow controls
  • Concept of how to include these exists
  • What they physically are is still very uncertain
• DAQ
  • Prototype DAQ system exists
  • Allows multiple PCs, so it is partitionable
  • Other existing DAQ systems could/should be studied further