
LHCb Computing Status Report Meeting with LHCC Referees October 19th, 1999

This report summarises the outcomes of the LHCb Front-End/DAQ Workshop, including the establishment of a common language, agreement on specifications, and areas where ideas need clarification. It also covers progress on the Readout Unit (RU) project and the use of fieldbuses for remote control and monitoring, and concludes with plans for the next production of simulated events and for a detailed specification of the Readout Supervisor (RS).


Presentation Transcript


  1. LHCb Computing Status Report
  Meeting with LHCC Referees, October 19th, 1999
  John Harvey, CERN/EP-ALC

  2. Outline
  • Frontend / DAQ workshop
  • DAQ - TFC, Readout Unit, Readout Network
  • ECS - Fieldbus, OPC, SCADA
  • Software - Migration to GAUDI
  • Computing Model Studies
  • Status and plans for next production of simulated events

  3. Frontend/DAQ Workshop
  • An LHCb workshop was held 12-14 October at CERN to discuss issues of front-end electronics and DAQ.
  • 35 participants from all the various subsystems and institutes.
  • A great deal of discussion and a positive outcome:
    • established a common language between all participants
    • underlined the advantages of taking a uniform approach and adopting uniform solutions wherever possible
    • agreement on the need to review specifications before building hardware (qualification)
    • global decisions (choice of parameters) to be easily and widely visible
    • areas where clarification of ideas is needed were identified
  • Agenda and slides: http://lhcb.cern.ch/electronics/meetings/FE-DAQ_workshop_october99/agenda.htm

  4. Trigger/DAQ Architecture
  [Architecture diagram with data rates: the LHCb detector (VDET, TRACK, ECAL, HCAL, MUON, RICH) is read out at 40 MHz, producing 40 TB/s. The Level 0 trigger (fixed latency 4.0 µs) reduces this to 1 MHz (1 TB/s) into the front-end electronics, steered by the Timing & Fast Control system. The Level 1 trigger (variable latency < 2 ms) selects 40 kHz; data pass through Front-End Multiplexers (FEM) and front-end links into the Readout Units (RU), which feed the Readout Network (RN) at 4 GB/s, with a throttle line back to the trigger. Sub-Farm Controllers (SFC) distribute complete events to the CPU farm running the Level 2 (variable latency ~10 ms) and Level 3 (~200 ms) event filters, all under Control & Monitoring; accepted events go to storage at 20 MB/s.]
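  A quick consistency check of the quoted figures (worked out here, not on the slide): 40 TB/s at 40 MHz and 1 TB/s at 1 MHz both correspond to an average raw event size of about 1 MB, while 4 GB/s at 40 kHz corresponds to about 100 kB per event entering the readout network. At 100 kB per event, the 20 MB/s storage bandwidth would allow roughly 200 events per second to be written.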

  5. Timing and Fast Control
  • Responsibilities:
    • transmission of the LHC clock and of the L0 and L1 triggers to the front-end electronics
    • monitoring the state of buffers at all stages of readout, throttling the trigger to avoid overflows
    • reacting to control commands such as 'resets'
    • support for partitioning
  • The Readout Supervisor (RS) is the heart of the TFC system:
    • it must model the state of the front-end electronics and throttle the trigger according to rules (see the sketch after this slide)
    • it must react to control commands, the DAQ throttle, resets, etc.
    • it must encode channel A (L0) and channel B (L1, resets) for the TTC system
  • An effort has started to make a detailed specification of the RS functions:
    • the next step is to define use cases in order to specify the format of channel B
    • types, frequency and timing of resets, commands to pulse detectors, etc.
  • The Warsaw group is starting to work on a prototype for the RS switch.
  • Effort will shortly be put into a detailed specification of the RS itself.
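  To make the buffer-modelling rule concrete, here is a minimal C++ sketch of the kind of bookkeeping the RS could perform for one front-end buffer. All names, the buffer depth and the throttle condition are illustrative assumptions; the actual rules are exactly what the detailed specification is meant to pin down.

    #include <cstddef>

    // Hypothetical model of one front-end derandomiser buffer. The RS
    // would keep such a model for each buffering stage and throttle the
    // trigger before any of them can overflow.
    class BufferModel {
    public:
        explicit BufferModel(std::size_t depth) : depth_(depth), occupancy_(0) {}

        void onTriggerAccept() { ++occupancy_; }                      // an accept queues one event
        void onReadout()       { if (occupancy_ > 0) --occupancy_; }  // one event drained

        // Illustrative rule: throttle as soon as one more accept could
        // overflow the buffer.
        bool mustThrottle() const { return occupancy_ + 1 >= depth_; }

    private:
        std::size_t depth_;      // hardware buffer depth, in events (assumed known)
        std::size_t occupancy_;  // modelled number of events currently queued
    };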

  6. TFC: Signal Generation and Treatment
  [Diagram: the LHC clock (RD12) is fanned out to the system. Several Readout Supervisors each receive the LHC turn signal, the global trigger decisions (gL0, gL1) and local triggers (L-0, L-1); their outputs pass through the Readout Supervisor switch to the L0, L1 and subdetector TTCtx transmitters (SD1, SD2, ..., SDn).]

  7. Trigger/DAQ Architecture
  [Same architecture diagram as slide 4.]

  8. Front-End Multiplexing / Readout Unit
  • There are ~2000 sources of data from the L1 electronics; these have to be combined onto ~200 links and fed to the Readout Units (RU).
  • There is agreement to aim for a common module for this, based on the RU.
  • The RU project is well underway. A number of issues were identified and some resolved:
    • a single fragment per event (simple protocol, good performance)
    • the maximum size of fragments is still to be determined; the issue is the cost of fast memory
    • a single data frame format was adopted for transport through the various readout stages
    • the data payload is to be self-describing (illustrated in the sketch after this slide)
    • the project is starting to investigate the use of a fieldbus for remote control/monitoring
  • A first prototype of the RU should be ready by the beginning of 2000.
  • Auxiliary equipment and software to allow testing is also being built.
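  As an illustration of the points just agreed (one fragment per event, a single transport frame format, a self-describing payload), a frame header might look as follows. The field names and widths are invented for this sketch and are not the agreed format.

    #include <cstdint>

    // Hypothetical transport frame header: one fragment per event, one
    // frame format for all readout stages, payload self-describing via
    // an explicit source and type.
    struct FrameHeader {
        std::uint32_t eventNumber;   // which event this (single) fragment belongs to
        std::uint16_t sourceId;      // which link/RU produced the fragment
        std::uint16_t fragmentType;  // payload encoding, so no side table is needed
        std::uint32_t payloadSize;   // number of payload bytes following the header
    };
    // The payload (payloadSize bytes) follows the header directly in the
    // data stream; the fragment size cap under discussion would bound it.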

  9. Trigger/DAQ Architecture
  [Same architecture diagram as slide 4.]

  10. Event Building
  • After discussion with representatives from industry we are convinced that a switching network supporting a 4 GB/s sustained rate can be acquired at affordable cost on the timescale of the LHCb DAQ.
  • Several technologies are in principle available:
    • Gb Ethernet or 10 Gb Ethernet
    • Myrinet
    • the latest estimate of the cost per port for 1 Gbps technology is now $500-$1000
  • In this light we see no reason to deviate from our 'full readout protocol' as described in the Technical Proposal.
  • We are currently concentrating on 'intelligent network interfaces' that would allow the incoming fragments to be assembled locally, so that only complete events are sent to the host CPU (see the sketch after this slide). A project based on an ATM card from IBM was started at the beginning of October.
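  As a sketch of the assembly logic such an intelligent network interface would run, fragments can be collected per event number and released to the host only when complete. The container choice and the completeness test (exactly one fragment from each source) are assumptions for illustration.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    // Hypothetical fragment assembly as an intelligent NIC might perform
    // it: fragments are collected per event number and only complete
    // events are handed to the host CPU.
    class EventAssembler {
    public:
        explicit EventAssembler(std::size_t nSources) : nSources_(nSources) {}

        using Fragment = std::vector<std::uint8_t>;
        using Event    = std::vector<Fragment>;

        // Returns the completed event, or nullptr while fragments are
        // still outstanding.
        const Event* addFragment(std::uint32_t eventNumber, Fragment fragment) {
            Event& event = pending_[eventNumber];
            event.push_back(std::move(fragment));
            return event.size() == nSources_ ? &event : nullptr;
        }

        // Called by the host once the complete event has been consumed.
        void release(std::uint32_t eventNumber) { pending_.erase(eventNumber); }

    private:
        std::size_t nSources_;                   // number of RUs feeding this interface
        std::map<std::uint32_t, Event> pending_; // partially assembled events
    };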

  11. Experiment Control System
  [Same architecture diagram as slide 4 (labelled 'LHC-B Detector'), shown here to introduce the Control & Monitoring system.]

  12. Control/Monitoring Structure
  [Layered diagram: at the supervision layer, SCADA systems with configuration databases, archives and logfiles, connected over WAN/LAN and interfaced to other systems (LHC, safety, ...); at the process-management layer, OPC servers communicating over LAN with VME crates, PLCs and PCs; at the field-management layer, fieldbuses connecting controllers to the experimental equipment and sensors/devices.]
  • SCADA = Supervisory Control And Data Acquisition
  • OPC = OLE for Process Control
  • PLC = Programmable Logic Controller
  • Fieldbuses = CAN, ProfiBus, WorldFIP, ...

  13. LHCb Priorities
  • Follow standards at all levels: LHCb, CERN, HEP, industry.
  • First priority is to adopt guidelines for the choice of fieldbus and its integration with hardware (milestone: end of this year).
  • Use PLCs for specialised domains (gas, magnet, ...) where safety is an issue.
  • Use PCs for the other domains: cheaper, more programming flexibility.
  • Adopt OPC (JCOP recommendation, Sept '99).
  • Adopt SCADA (JCOP tender under discussion; timescale target is ~end 2000).
  [Diagram: the layer stack of slide 12 (GUI/MMI and SCADA high-level services, sub-system supervision and network communication, OPC software interfaces and communication protocols, PLC/PC process control and I/O interfaces, fieldbus controllers, sensors/devices and hardware resources), annotated with a priority scale running from low at the top to high at the bottom.]

  14. Fieldbus
  • Collect requirements in terms of:
    • number of boards, bytes per board, frequency and speed
    • types of boards and interface to the electronics
    • technical constraints (radiation environment, power consumption, space)
    • issue a questionnaire (next week)
  • Match the requirements to fieldbus capabilities and produce LHCb guidelines.
  • Concrete projects to get hands-on experience: a fieldbus controller for the RU.

  15. Supervisory Software (SCADA)
  • LHCb requirements:
    • the number of different device types is of the order of 100
    • the number of devices is of the order of 17'000
    • the number of parameters is of the order of n·10^7
    • the number of monitored quantities is of the order of n·10^5
  • Implications for the choice of SCADA system:
    • a tag-oriented system is unrealistic if each parameter is an entry
    • we need a namespace hierarchy (Device -> Parameters), as sketched after this slide
    • for highly repetitive items (e.g. individual detector channels in an electronics board) arrays are needed; we don't want to name each of them individually
  • Short term: Bridgeview as the SCADA system.
  • Longer term: adopt a scalable, device-oriented commercial product (JCOP tender).
  • Invest in OPC to ease the transition.
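  A minimal sketch of the device-oriented model argued for above, with invented class names and naming scheme (not taken from Bridgeview or any JCOP candidate product):

    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical device-oriented data model: parameters live under a
    // device (Device -> Parameters) and repetitive items are arrays
    // rather than individually named tags.
    struct Parameter {
        std::vector<double> values;  // a single value, or one per channel
    };

    struct Device {
        std::map<std::string, Parameter> parameters;  // e.g. "thresholds"
    };

    // ~17'000 devices of ~100 types, addressed by a hierarchical name
    // such as "ECAL/Crate03/Board12" (naming scheme assumed).
    using DeviceTree = std::map<std::string, Device>;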

  16. Migration to GAUDI
  • Final objective:
    • produce a complete set of fully functional data processing applications using exclusively OO technology
  • Sub-objectives:
    • provide a fully functional framework (GAUDI)
    • assemble new and old algorithms into a single and complete suite of data processing applications; be able to run productions
    • convert all the existing FORTRAN code to C++

  17. Possible Strategies
  [Diagram comparing three strategies for moving from FORTRAN (SICb) to C++ (GAUDI): (1) fast translation of the FORTRAN into C++; (2) wrapping the FORTRAN in GAUDI; (3) a phased migration through framework development, transition, hybrid and consolidation phases.]

  18. Framework Development Phase
  • At the end of this phase the GAUDI framework should be functionally complete:
    • data access services
    • generic event model
    • generic detector description model
    • data visualization
    • basic set of services
  • Develop some physics algorithms to prove the architecture concept (a schematic skeleton follows this slide).
  • We started this phase one year ago.
  • We expect to complete it by the middle of November 1999 with the release of GAUDI v3.
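  For context, GAUDI organises physics code as algorithms whose initialize/execute/finalize methods are invoked by the framework, with data exchanged through a transient store. The skeleton below is schematic; the exact base-class interface of the v3 release may differ.

    // Placeholder for GAUDI's StatusCode type, for this sketch only.
    enum class StatusCode { SUCCESS, FAILURE };

    // Schematic GAUDI-style algorithm (not the literal v3 interface):
    // the framework calls initialize() once, execute() once per event
    // and finalize() at end of job; data are exchanged through the
    // framework's transient data store rather than passed directly.
    class TrackFitAlg /* : public Algorithm in the real framework */ {
    public:
        StatusCode initialize() { return StatusCode::SUCCESS; }  // book histograms, locate services
        StatusCode execute()    { return StatusCode::SUCCESS; }  // fetch hits from the store, fit tracks
        StatusCode finalize()   { return StatusCode::SUCCESS; }  // print summary statistics
    };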

  19. Transition Phase
  • At the end of this phase we should be able to reconstruct and analyze simulated data within the GAUDI framework. Monte Carlo data production will still be done using SICb.
  • Incorporate the reconstruction and analysis parts of SICb in the GAUDI framework by wrapping the FORTRAN code:
    • analyse SICb to identify all modules, their inputs and outputs
    • develop a complete OO event data model
    • write converters to allow access to the data in both formats (a one-way sketch follows this slide)
  • Development of new algorithms can proceed within GAUDI.
  • Caveats:
    • it is a lot of work to make converters in both directions
    • we could discover technical difficulties (size, commons, initialization, ...)
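  A one-way converter of the kind described would read a ZEBRA bank as laid out by the FORTRAN code and populate objects of the new OO event model. The bank layout and class below are invented for illustration; the real SICb banks differ.

    #include <cstddef>
    #include <vector>

    // Hypothetical in-memory layout of one SICb Monte Carlo hit as the
    // FORTRAN code stores it in a ZEBRA bank (layout invented).
    struct McHitBankEntry {
        float x, y, z;   // hit position
        float energy;    // deposited energy
    };

    // Corresponding object of the new OO event model (illustrative).
    class MCHit {
    public:
        MCHit(float x, float y, float z, float e) : x_(x), y_(y), z_(z), e_(e) {}
    private:
        float x_, y_, z_, e_;
    };

    // Converter: bank contents -> C++ objects for the GAUDI store. A
    // second converter in the opposite direction is what makes two-way
    // access the large job noted in the caveats.
    std::vector<MCHit> convertHits(const McHitBankEntry* bank, std::size_t nHits) {
        std::vector<MCHit> hits;
        hits.reserve(nHits);
        for (std::size_t i = 0; i < nHits; ++i)
            hits.push_back(MCHit(bank[i].x, bank[i].y, bank[i].z, bank[i].energy));
        return hits;
    }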

  20. Transition Phase - 2
  [Diagram: the generator and detector simulation run in the SICb (GEANT 3) framework, producing MC hits and DSTs; reconstruction and analysis run in the GAUDI framework, producing histograms and ntuples.]

  21. Transition Phase - 3
  [Diagram: the generator and detector simulation (SICb) fill ZEBRA commons/banks (MC hits, DST); converters (Cnv) move the banks into objects in the GAUDI store, where wrapped FORTRAN modules (FA, FB, FC) and new C++ algorithms (C++A, C++B) cooperate in reconstruction and analysis, producing histograms and ntuples.]

  22. Hybrid Phase
  • One single program, with FORTRAN and C++ cooperating to produce physics results (a minimal interoperability sketch follows this slide).
  • Replace the wrapped FORTRAN code incrementally.
  • At the end of this phase we should be able to retire the FORTRAN compiler and libraries.
  • Already known problems:
    • two different detector descriptions, difficult to maintain
    • output file format: which one to choose?
    • different sets of input 'card files'
    • large memory needs for data and code
  • The hybrid phase should be as short as possible to minimize the pain.
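  The hybrid program relies on standard FORTRAN/C++ interoperability: each wrapped FORTRAN routine is declared extern "C" using the compiler's naming convention (typically a trailing underscore, with arguments passed by reference). A minimal sketch with an invented routine name:

    // Hypothetical SICb routine:  SUBROUTINE RECEVT(IRUN, IEVT)
    // Most Unix FORTRAN compilers export the symbol with a trailing
    // underscore and pass all arguments by reference.
    extern "C" void recevt_(int* irun, int* ievt);

    // Driving the wrapped FORTRAN reconstruction from C++; linking
    // requires the FORTRAN object code and runtime library.
    void processEvent(int run, int event) {
        recevt_(&run, &event);
        // ... then run new C++ algorithms on the converted data.
    }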

  23. Consolidation Phase
  • Complete the new detector description.
  • Re-iterate on the OO event model.
  • Re-engineer some of the algorithms.
  • Incorporate Geant4 with GAUDI to make the framework for the new simulation program.
  • ...

  24. Planning
  [Gantt chart covering Q3 1998 - Q4 2000: architecture design, followed by GAUDI development releases v1, v2 and v3 (the framework development phase); functional analysis of SICb and the transition phase lead to the production program, followed by the hybrid phase.]

  25. Computing Model Studies
  • We are revising our computing requirements:
    • event sizes and storage requirements
    • triggering CPU requirements (optimisation strategies)
    • availability of calibration/alignment constants
    • simulation requirements
  • We are looking at 'use cases' for the analysis of selected physics channels (π+π−, μμ, J/ψ):
    • data retained at each stage
    • processing required at each stage
    • frequency of processing cycles
  • Generate a baseline model for the distribution and role of experiment-specific, CERN and regional computing facilities.
  • Timescales are fixed by the impending Hoffmann Review.
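  For scale (an illustrative estimate, not from the slide): assuming the canonical 10^7 seconds of running per year and the 20 MB/s storage rate of the architecture slide, raw data alone would amount to roughly 200 TB per year, before simulated and derived data are added.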

  26. SICb Status
  • Generators: Pythia 6.125 + QQ (CLEO) for decays.
  • Full GEANT simulation.
  • Realistic detector description:
    • WARM magnet design
    • realistic beampipe model
    • calorimeters, muons and RICHes follow the current optimisation
  • Reconstruction:
    • no pattern recognition in the tracking
    • full pattern recognition in the RICHes

  27. Plans up to End 1999
  • Trigger optimisation at CERN (PCSF):
    • 200k minimum bias - 4 different settings (luminosity, generator)
    • 20k b -> μX
    • 20k b -> eX
    • 20k B0d -> π+π−
    • 20k B0d -> J/ψ(μμ) K0s
    • 20k B0d -> J/ψ(ee) K0s
  • Background studies with GCALOR at Lyon:
    • 50k minimum bias
  • Physics production:
    • 500k b inclusive at Rutherford lab (PCSF)
    • 100k other physics channels at CERN (PCSF)

  28. Training
  • Formal training through CERN Technical Training services:
    • Hands-on Object-Oriented Analysis, Design and Programming with C++
    • C++ for Particle Physicists (Kunz)
    • at least 40 people have followed the courses
  • Follow-up training:
    • books - Design Patterns (Gamma), Large-Scale C++ Software Design (Lakos), ...
  • Learning on the job - we already do the following:
    • design discussions where developers present their designs for review
    • tutorials on new GAUDI releases during LHCb software weeks
  • Learning on the job - ideas for the future include:
    • code walkthroughs
    • documenting patterns (both the well-tried and successful ones and those that failed)
  • Consultancy and mentoring:
    • have the GAUDI team spend 50% of their time helping algorithm builders
