
Data Acquisition and Trigger of the CBM Experiment

Data Acquisition and Trigger of the CBM Experiment. Volker Friese, GSI Darmstadt. International Conference on Triggering Discoveries in High Energy Physics, 11 September 2013, Jammu, India. A very short talk: overview of CBM (see my presentation of yesterday) and the CBM trigger.



  1. Data Acquisition and Trigger of the CBM Experiment. Volker Friese, GSI Darmstadt. International Conference on Triggering Discoveries in High Energy Physics, 11 September 2013, Jammu, India

  2. A very short talk • Overview of CBM • see my presentation of yesterday • CBM trigger • there will be none • Thanks for your attention V. Friese

  3. Reminder: experimental setup • Electron + hadron setup: ECAL, TOF, TRD, RICH, PSD • Measurement of hadrons (including open charm) and electrons • Core tracker: STS (silicon strip detectors) • Micro-vertex detector (MVD) for precision measurement of displaced vertices • STS+MVD

  4. Reminder: experimental setup • Muon setup: absorbers + detectors • Measurement of muons (low-mass vector mesons and charmonia) in an active absorber system • STS+MVD

  5. The Challenge • typical CBM event: about 700 charged tracks in the acceptance • strong kinematical focusing in the fixed-target setup: high track densities • up to 10^7 such events per second • find very rare signals, e.g. by decay topology, in such a background

  6. ...hold it for a second... • 10 MHz event rate with heavy ions? You're crazy. Past and current experiments run at several Hz to several 100 Hz... • But in particle physics, they have much higher rates. • Yes, but our event topology is much more complex... • So what do you think defines the rate limit? • The machine? Not in fixed target... • Detectors? You can build fast ones. Just stay away from large drift chambers... • Electronics? Can also be fast. Just invest a little more and supply proper cooling... • What then? • It's the data processing, stupid!

  7. Trigger Considerations • Signatures vary qualitatively: • local and simple: J/ψ → μ+μ- • non-local and simple: J/ψ → e+e- • non-local and complex: D, Ω → charged hadrons • For maximal interaction rate, reconstruction in the STS is always required (momentum information), but not necessarily of all tracks in the STS. • Trigger architecture must enable • a variety of trigger patterns (J/ψ: 1% of data, D mesons: 50% of data) • multiple triggers at a time • multiple trigger steps with subsequent data reduction • Complex signatures involve secondary decay vertices; difficult to implement in hardware. • Extreme event rates set strong limits on trigger latency.

  8. Running Conditions • Detector, FEE and DAQ requirements are set by the most extreme case • Design goal: 10 MHz minimum-bias interaction rate • Requires online data reduction by a factor of up to 1,000
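The quoted rates can be cross-checked with simple arithmetic. Assuming roughly 100 kB of raw data per minimum-bias event (an illustrative round number, not a figure from the talk), a 10 MHz interaction rate gives about 1 TB/s into the online system, and a reduction factor of 1,000 brings the archiving rate down to about 1 GB/s, consistent with the numbers on the following slides:

```python
# Back-of-envelope check of the CBM online data rates.
# The per-event raw data size is an assumed round number for illustration.
interaction_rate_hz = 10e6      # design goal: 10 MHz minimum-bias rate
event_size_bytes = 100e3        # assumed ~100 kB raw data per event
reduction_factor = 1000         # required online data reduction

input_rate = interaction_rate_hz * event_size_bytes   # bytes/s into FLES
output_rate = input_rate / reduction_factor           # bytes/s to storage

print(f"online input: {input_rate / 1e12:.1f} TB/s")  # -> 1.0 TB/s
print(f"to storage  : {output_rate / 1e9:.1f} GB/s")  # -> 1.0 GB/s
```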

  9. CBM Readout Concept • Finite-size FEE buffer: latency-limited vs. throughput-limited readout

  10. Consequences • The system is limited only by the throughput capacity and by the rejection power of the online computing farm. • There is no a-priori event definition: data from all detectors arrive asynchronously; events may overlap in time. • The classical DAQ task of "event building" becomes a "time-slice building". Physical events are defined later in software. • Data reduction is shifted entirely to software: maximum flexibility w.r.t. physics
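The shift from event building to time-slice building can be pictured with a minimal sketch (hypothetical data layout, not the actual CBM software): timestamped hits arrive asynchronously from several detectors, are merged in global time order, and are cut into fixed-length time slices; event association happens only later, in software.

```python
import heapq
from collections import defaultdict

def build_timeslices(streams, slice_length_ns):
    """Merge per-detector streams of (timestamp_ns, hit) pairs,
    each already sorted in time, and group them into fixed-length
    time slices. Note: no a-priori event definition is applied."""
    slices = defaultdict(list)
    for t, hit in heapq.merge(*streams):          # global time ordering
        slices[t // slice_length_ns].append((t, hit))
    return dict(slices)

# Two asynchronous detector streams; events may overlap in time.
sts = [(12, "sts-hit"), (105, "sts-hit"), (998, "sts-hit")]
tof = [(13, "tof-hit"), (990, "tof-hit")]

ts = build_timeslices([sts, tof], slice_length_ns=500)
print(ts[0])  # hits from both detectors in the first 500 ns slice
```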

  11. The Online Task • Online data processing between the CBM FEE (1 TB/s at maximum interaction rate) and mass storage (1 GB/s)

  12. CBM Readout Architecture • DAQ: data aggregation, time-slice building (pre-processing?) • FLES: event reconstruction and selection

  13. Components of the read-out chain • Detector front-ends • each channel performs autonomous hit detection and zero suppression • associate an absolute time stamp with each hit, aggregate data • data-push architecture • Data Processing Board (DPB) • perform channel- and segment-local data processing • feature extraction, time sorting, data reformatting, merging input streams • time conversion and creation of microslice containers • FLES Interface Board (FLIB) • time indexing and buffering of microslice containers • data sent to the FLES is concise: no need for additional processing before interval building • FLES computing nodes • calibration and global feature extraction • full event reconstruction (4-d) • event selection
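The microslice containers mentioned above can be pictured as small, self-describing blocks holding all hits of one input link within a fixed time interval. The sketch below is purely illustrative; the field names are assumptions, not the actual CBM data format.

```python
from dataclasses import dataclass, field

@dataclass
class Microslice:
    """Illustrative microslice container: the hits of one input
    link/segment within a fixed time interval (field names assumed)."""
    link_id: int          # which DPB input stream produced this
    start_time_ns: int    # absolute start of the covered interval
    length_ns: int        # fixed interval length
    hits: list = field(default_factory=list)  # (timestamp_ns, payload)

    def accepts(self, timestamp_ns):
        """True if a hit with this time stamp belongs in this container."""
        return self.start_time_ns <= timestamp_ns < self.start_time_ns + self.length_ns

ms = Microslice(link_id=3, start_time_ns=0, length_ns=1000)
ms.hits.append((42, b"\x01\x02"))
print(ms.accepts(42), ms.accepts(1500))  # True False
```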

  14. Data Processing Board

  15. FLES Interface Board (FLIB) • PCIe add-on board to connect FLES nodes and DPBs • Tasks: • consumes microslice containers received from the DPB • time indexing of microslice containers for interval building • transfer of microslice containers and index to PC memory • Current development version: • test platform for FLES hardware and software developments • readout device for test beams and lab setups • Requirements: • fast PCIe interface to the PC • high number of optical links • large buffer memory • Readout firmware for a Kintex-7-based board under development

  16. FLES Architecture • FLES is designed as an HPC cluster • commodity hardware • GPGPU accelerators • Total input rate ~1 TB/s • Infiniband network for interval building • high throughput, low latency • RDMA data transfer, convenient for interval building • most-used system interconnect in the latest top-50 HPC systems • Flat structure; input nodes distributed over the cluster • full use of Infiniband bandwidth • input data is concise, no need for processing before interval building • Decision on actual hardware components as late as possible
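With a flat structure and input nodes distributed over the cluster, interval building amounts to every input node sending its microslices for a given time interval to the compute node responsible for that interval. A minimal round-robin sketch (the scheduling scheme is an assumption for illustration, not the actual FLES scheduler):

```python
def node_for_interval(interval_index, n_compute_nodes):
    """Round-robin mapping of timeslice intervals to compute nodes,
    so that every input node sends its data for a given interval to
    the same node (illustrative scheme, not the FLES implementation)."""
    return interval_index % n_compute_nodes

# With 8 compute nodes, consecutive intervals go to consecutive nodes,
# spreading the ~1 TB/s input evenly over the Infiniband fabric.
assignments = [node_for_interval(i, 8) for i in range(10)]
print(assignments)  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```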

  17. Data Formats

  18. FLES location

  19. Online reconstruction and data selection • Deciding on interesting data requires (partial) reconstruction of events: • track finding • secondary vertex finding • further reduction by PID • Throughput depends on • capacity of the online computing cluster • performance of the algorithms • Algorithms must be fully optimised w.r.t. speed, which includes full parallelisation • tailored to specific hardware (many-core CPU, GPU) • beyond the scope of the common physicist; requires software experts

  20. Reconstruction backbone: Cellular Automaton in STS • cells: track segments based on the track model • find and connect neighbouring cells (potentially belonging to the same track) • select tracks from the candidates • simple and generic • efficient and very fast • local w.r.t. data and intrinsically parallel
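A toy illustration of the cellular-automaton idea (greatly simplified: 1-d hit positions, unit station spacing, straight-line track model, invented data; not the actual CBM CA): cells are segments between hits on adjacent stations, neighbouring cells that share a hit and agree with the track model are linked, and the longest chains are the track candidates.

```python
def longest_track_chain(hits_per_station, max_slope_diff=0.1):
    """Toy cellular automaton. hits_per_station[s] lists the 1-d hit
    positions on station s; stations are unit-spaced."""
    # 1. cells: segments between hits on neighbouring stations
    cells = [(s, y1, y2)
             for s in range(len(hits_per_station) - 1)
             for y1 in hits_per_station[s]
             for y2 in hits_per_station[s + 1]]  # slope = y2 - y1

    # 2. link neighbours: cells sharing a hit with a compatible slope;
    #    processing station by station propagates chain lengths forward
    length = {c: 1 for c in cells}
    for s, y1, y2 in sorted(cells):
        for c in cells:
            if (c[0] == s + 1 and c[1] == y2
                    and abs((c[2] - c[1]) - (y2 - y1)) < max_slope_diff):
                length[c] = max(length[c], length[(s, y1, y2)] + 1)

    # 3. select: the longest chain is the best track candidate
    return max(length.values())

# Two straight tracks crossing four stations.
hits = [[0.0, 5.0], [1.0, 5.1], [2.0, 5.2], [3.0, 5.3]]
print(longest_track_chain(hits))  # chain of 3 cells spans all 4 stations
```

Each step touches only neighbouring cells, which is what makes the real algorithm local in the data and intrinsically parallel.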

  21. CA performance • STS track finding with high efficiency on the 10 ms level

  22. CA scalability • Good scaling behaviour: well suited for many-core systems

  23. CA stability • Stable performance also for large event pile-up

  24. Many more tasks for online computing • Track finding in STS • Track fit • Track finding in TRD • Track finding in the Muon System • Ring finding in RICH • Matching RICH rings, TOF hits and ECAL clusters to tracks • Vertexing • Analysis and data selection

  25. Parallelisation in CBM reconstruction

  26. Summary • CBM will employ no hardware trigger. • Self-triggered FEE will ship time-stamped data as they come to the DAQ. • The DAQ aggregates the data and pushes them to the FLES. • Transport containers are microslices and timeslices. • Online reconstruction and data selection will be done in software on the FLES (HPC cluster). • Fast algorithms for track finding and fitting have been developed; parallelisation and optimisation of the entire reconstruction chain are progressing well. • Material provided by J. de Cuveland, D. Hutter, I. Kisel, I. Kulakov and W. Müller. Thanks!

  27. Backup

  28. Microslices

  29. Microslices

  30. FLES test setup
