
DAQ for the VELO testbeam

This presentation describes the data-acquisition (DAQ) system used during the 2006 VELO testbeam campaign, focusing on the ACDC3 edition. It covers the hardware and software used, the challenges faced during data taking, and acknowledges contributions from various individuals.



Presentation Transcript


  1. DAQ for the VELO testbeam. Vertex 2007, 16th International Workshop on Vertex Detectors, September 23-28, 2007, Lake Placid, NY, USA. Niko Neufeld, CERN/PH

  2. Thanks • I am (really) only the messenger • Many thanks (in no particular order) to: Paula Collins, Marina Artuso, Kazu Akiba, Jan Buytaert, Aras Papadelis, Lars Eklund, Jianchun Wang, Ray Mountain from the LHCb VeLo team • Artur Barczyk, Sai Suman from the LHCb Online team • And apologies to all those not listed here • But of course you’re welcome to shoot the messenger: misunderstandings and errors are entirely my fault

  3. VeLo testbeam campaign 2006: ACDC • ACDC (Alignment Challenge and Detector Commissioning) was held in three major editions • ACDC1: one week in the lab (April 2006) • ACDC2: 2½ weeks (August 2006) in the H8 test-beam area; 3 modules, most of the controls & DAQ, 1 TB of data, all non-zero-suppressed • ACDC3: 2½ weeks (November 2006) in the H8 test-beam facility in the CERN North Area • Each edition built on the success of the previous one and all were very important; the focus of this presentation is ACDC3

  4. ACDC3 • ½ of the right VeLo half assembled (10 modules) • Designed to be a full test of the Experiment Control System (ECS), electronics and DAQ • Used boards from the final production; HV and LV supplies were the same as in the final setup • Prototype firmware (FPGA) and software • 6 “slices” of the VeLo operated and read out simultaneously

  5. VeLo Slice • [Diagram of one VeLo slice, drawing by K. Akiba, P. Jalocha & L. Eklund: temperature board with interlocks (CAN/USB), production temperature cable, production repeater board with drivers and ARx, 15 m data cables to the TELL1 (data to the DAQ over GbE), TFC and ECS connections, trigger supervisor, short and long kaptons, production control board with SPECS master and LV control cable, CAEN LV, ISEG HV and HV patch box]

  6. The LHCb DAQ • [Diagram of the LHCb DAQ architecture: front-end electronics of the sub-detectors (VELO, ECal, HCal, Muon, RICH, ST, OT) feeding ~320 readout boards (TELL1/UKL1); L0 trigger; TFC system distributing the LHC clock and MEP requests; 660 links carrying 40 GB/s into the readout network for event building; up to 50 subfarms with up to 44 servers each (HLT farm); up to 500 MB/s output to storage; the Experiment Control System (ECS) carries control and monitoring data]

  7. LHCb DAQ fact-sheet • Readout over copper Gigabit Ethernet (1000BaseT, >3000 links) • Total event size, zero-suppressed: 35 kB • Event building over a switching network • Push protocol: readout boards send data as soon as they are ready; no hand-shake, no request; relies on buffering in the destination • Synchronous readout of the entire detector at 1 MHz; ~320 TELL1 boards; individual fragments <100 bytes; several triggers are packed into a Multi Event Packet (MEP) • Synchronisation of readout boards (and detector electronics) via the standard LHC TTC (timing & trigger control) system • Trigger master in LHCb: the Readout Supervisor (“Odin”) • Global back-pressure mechanism (“throttle”)
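
  The packing idea can be sketched in a few lines of Python; the header fields and fragment sizes below are invented for illustration only and are not the actual LHCb MEP format:

      # Illustrative sketch only: packing several trigger fragments into one
      # "multi event packet" to amortise per-packet overhead, as described on
      # the slide. The header layout is invented, NOT the real LHCb MEP format.
      import struct

      def pack_mep(fragments, first_event_id):
          """Concatenate small per-trigger fragments into one packet."""
          header = struct.pack("<IH", first_event_id, len(fragments))
          body = b"".join(struct.pack("<H", len(f)) + f for f in fragments)
          return header + body

      # With ~100-byte fragments at 1 MHz, single packets would mean 1 M
      # packets/s per source; packing e.g. 16 triggers cuts the packet rate
      # (and per-packet overhead) by a factor of 16.
      fragments = [bytes(100) for _ in range(16)]
      mep = pack_mep(fragments, first_event_id=42)
      print(len(mep), "bytes for 16 triggers")  # ~1638 bytes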

  8. ACDC3 DAQ • [Diagram of the ACDC3 DAQ setup, drawing by K. Akiba: analog data over 15 m cables from 16 repeater boards into the TELL1s; 6 long kaptons to the control board; ODIN distributing TTC over optical fibre to the TELL1 TTCrx; TELL1 Gigabit Ethernet data outputs through an HP5412 switch to the event-building PCs lbh8lx01 and lbh8lx02 running Gaudi; CCPCs, Hugin, ECS, LV and the temperature board on the control side]

  9. ACDC3 Scintillation Trigger • [Diagram of the scintillator arrangement, drawing by JC Wang & R Mountain: upstream counter A0 and counter pairs A1/A2 (type A, 1.7 x 10 cm) and B1/B2 (type B, 3.3 x 20 cm); 1/4-inch thick scintillators with light guides and PMTs; dimensions in cm] • Trigger = A0 AND [(A1 AND A2) OR (B1 AND B2)], efficiency ~90% • For straight-through tracks use only the 2 upstream counters (in AND) • For interactions with the detector use the 2 upstream counters (in AND) .AND. (B1 .OR. B2) • For interactions with the target use the 2 upstream counters (in AND) .AND. B2, plus an extra scintillator (in OR) • 1 spill is about 4 s, followed by about 12 s with no beam
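
  As a rough illustration of the coincidence logic listed above, here is a small Python sketch; treating A1 and A2 as the two upstream counters, and the exact grouping of the extra scintillator's OR, are my assumptions and not taken from the slide:

      # Sketch of the scintillator coincidence logic; PMT hits are booleans.
      # Which counters are the "two upstream ones" and how the extra
      # scintillator is grouped are interpretations, not from the slide.
      def straight_through_trigger(a1, a2):
          # Straight-through tracks: the two upstream counters in coincidence.
          return a1 and a2

      def detector_interaction_trigger(a1, a2, b1, b2):
          # Interaction with the detector: upstream AND, plus (B1 OR B2).
          return (a1 and a2) and (b1 or b2)

      def target_interaction_trigger(a1, a2, b2, extra):
          # Interaction with the target: upstream AND, plus B2 or an extra
          # scintillator (grouping assumed).
          return (a1 and a2) and (b2 or extra)

      print(detector_interaction_trigger(True, True, False, True))  # True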

  10. Readout Board TELL1 • Originally developed for the VeLo, now the common readout board for (almost) all of LHCb • 64 analogue copper inputs • Parallelised FPGA processing • Power-only backplane • 4 x 1 Gigabit Ethernet outputs • Control via an embedded i486 microcontroller on a separate LAN for control and monitoring • 84 for the complete VeLo • ~320 for all of LHCb

  11. Running the ACDC3 DAQ • 12 readout boards (TELL1s) • Non-zero-suppressed data: 4-5 kB x 12 per event • 2.0 kEvent/s max DAQ rate with 1 PC (1 Gigabit link can handle ~120 MB/s) • Too many packets to route, buffering problem in the network: the “lab setup” switch is insufficient • Problem typical for a push protocol (see next slide) • Used production equipment from the main DAQ: the HP5412 handles the ACDC3 packet rate without problems and is now installed at Point 8 • The full LHCb will need even more powerful equipment (which exists)
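
  A quick back-of-the-envelope check of these numbers (12 boards at 4-5 kB each against one ~120 MB/s Gigabit link into a single PC):

      # Back-of-the-envelope check of the single-PC rate limit quoted on the
      # slide: 12 TELL1s x ~4.5 kB of non-zero-suppressed data per event,
      # received over one Gigabit Ethernet link (~120 MB/s usable).
      n_boards = 12
      fragment_kb = 4.5          # 4-5 kB per board per event
      link_mb_per_s = 120.0      # usable throughput of one 1 Gbit/s link

      event_kb = n_boards * fragment_kb                 # ~54 kB per event
      max_rate_khz = link_mb_per_s * 1000 / event_kb / 1000
      print(f"event size ~{event_kb:.0f} kB, max rate ~{max_rate_khz:.1f} kHz")
      # -> ~54 kB and ~2.2 kHz, consistent with the ~2.0 kEvent/s quoted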

  12. Congestion • [Diagram: several synchronised sources sending to the same switch output port at the same time; the output buffer overflows (“Bang”)] • The “bang” translates into random, uncontrolled packet loss • In Ethernet this is perfectly valid behaviour and is implemented by very cheap devices • Higher-level protocols are supposed to handle the packet loss due to lack of buffering • The problem comes from synchronised sources sending to the same destination at the same time
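
  A minimal sketch of why synchronised senders overflow a shallow output buffer; the buffer and fragment sizes used below are illustrative assumptions, not properties of any particular switch:

      # Illustrative model of output-port congestion with a push protocol:
      # N sources burst a fragment to the same output port at the same time,
      # the port drains at line rate, and a shallow buffer overflows.
      def dropped_bytes(n_sources, fragment_bytes, buffer_bytes):
          # During the synchronised burst the port can forward roughly one
          # link's worth of data; the rest must be buffered or dropped.
          arriving = n_sources * fragment_bytes
          must_buffer = arriving - fragment_bytes       # one fragment drains
          return max(0, must_buffer - buffer_bytes)     # overflow is lost

      # e.g. 12 sources x 4500 B into a port with only 32 kB of buffering
      print(dropped_bytes(12, 4500, 32 * 1024), "bytes lost in one burst")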

  13. DAQ Software • Data are copied only once • Communication between the event builder, the trigger process and the disk writer via shared memory • Uses the standard LHCb (offline) software framework: GAUDI • Data flow: data input -> event builder -> MEP (shared memory) -> EventFormatter -> EVENT -> Moore/Vetra -> RESULT -> Sender/DiskWriter -> data output (file)
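
  A hypothetical sketch of the single-copy, shared-memory hand-off between the event builder and its consumers; the function names and the use of Python's multiprocessing.shared_memory are illustrative only and are not the actual GAUDI-based implementation:

      # Hypothetical illustration of a single-copy, shared-memory hand-off
      # between an event builder and a consumer (trigger or disk writer).
      # Not the actual GAUDI/LHCb implementation.
      from multiprocessing import shared_memory

      def build_event(fragments):
          """Event builder: write the assembled event once into shared memory."""
          event = b"".join(fragments)
          shm = shared_memory.SharedMemory(create=True, size=len(event))
          shm.buf[:len(event)] = event
          return shm, len(event)

      def consume_event(name, size):
          """Consumer: attach to the segment and read the event in place."""
          shm = shared_memory.SharedMemory(name=name)
          data = bytes(shm.buf[:size])   # a real consumer would avoid this copy
          shm.close()
          return data

      shm, size = build_event([b"frag-%d" % i for i in range(12)])
      print(len(consume_event(shm.name, size)), "bytes read from shared memory")
      shm.close(); shm.unlink()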

  14. Getting “real” • Running non-stop for 3 weeks

  15. To trust is good, to control is better • [Diagram of the ACDC3 control system: ODIN and TTC (optical) to the TELL1 TTCrx; CCPCs on ODIN and the TELL1s; 16 repeater boards and 15 m analog data cables; 6 long kaptons to the control board; SPECS links from the control PCs (lbh8wico01, lbh8lxco01, Lbtbxp04) running PVSS; CAEN LV and ISEG HV; temperature board read out via USB-CAN / ELMB with a temperature interlock; Ethernet control network connected to the CERN network]

  16. (Run-)Controls Software • VeLo front-end, TELL1s, trigger and VeLo temperatures controlled by PVSS, the backbone of LHCb’s Experiment Control System (ECS) • Some slow controls were not yet integrated: ISEG HV and CAEN LV; cooling monitoring (LabView) • Event builder and data transfer: scripts (worked reasonably well because the “farm” consisted of at most two PCs) • TELL1: scripts (painful, but PVSS integration came late)

  17. PVSS in action • [Screenshots: VeLo PVSS temperatures display and front-end configuration; Online PVSS TELL1 control FSM]

  18. Summary • The ACDC DAQ worked well: the first “field” use of LHCb’s data acquisition • Discovered and fixed a lot of small problems in software and hardware; no show-stoppers for LHCb! • Learned a lot of useful lessons • Acquired around 3.5 TB of data, roughly 60 million events • Nominal DAQ with non-zero-suppressed events and 12 boards (up to 4 kHz, typically 2 kHz) • DAQ tested at rates up to 100 kHz with zero suppression (1/10 of the nominal rate, with ~1% of the network and farm): scalability is OK • Main problem (after everything else had been fixed): rate to storage, since a local disk can only handle ~30 MB/s, the rate to CERN tape storage (CASTOR) is limited to 30-50 MB/s, and the rate into a single PC is ~120 MB/s
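
  A quick consistency check of the summary figures (assuming decimal TB and the quoted usable disk and link rates):

      # Rough consistency check of the summary figures (decimal TB assumed).
      total_tb = 3.5
      n_events = 60e6
      avg_event_kb = total_tb * 1e9 / n_events          # ~58 kB per event
      print(f"average event size ~{avg_event_kb:.0f} kB")

      # Storage-limited event rates for non-zero-suppressed events of that size
      for name, mb_per_s in [("local disk", 30), ("CASTOR", 50), ("single PC link", 120)]:
          rate_hz = mb_per_s * 1e3 / avg_event_kb
          print(f"{name}: ~{rate_hz:.0f} events/s")
      # -> ~500, ~860 and ~2000 events/s, matching the "typically 2 kHz" above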

  19. And now… on to the LHCb cavern

  20. 372 long-distance cables

  21. 84 TELL1s

  22. 2 x 7 control boards

  23. VeLo and its DAQ ready to go!
