
Architecture and Dataflow Overview


Presentation Transcript


  1. Architecture and Dataflow Overview. LHCb Data-Flow Review, September 2001. Beat Jost, CERN / EP

  2. Overall Architecture
  • Functional Components
    • Timing and Fast Controls (TFC)
    • Front-End Multiplexing (FEM)
    • Readout Unit (RU)
    • Readout Network (RN)
    • Sub-Farm Controllers (SFC)
    • CPU Farm
  • External Interfaces/Sub-Systems
    • Front-End Electronics
    • Triggers (Level-0 and Level-1)
    • Accelerator and Technical Services
    • Controls & Monitoring
  • Dataflow (architecture diagram): the LHCb detector (VELO, TRACK, ECAL, HCAL, MUON, RICH) produces 40 TB/s at the 40 MHz bunch-crossing rate. The Level-0 trigger (fixed latency of 4.0 µs) accepts events at 1 MHz, giving 1 TB/s into the Level-1 Front-End Electronics. The Level-1 trigger (variable latency, < 2 ms) accepts 40-100 kHz; the Front-End Multiplexers (FEM) and Readout Units (RU) push 6-15 GB/s over the Front-End links into the Readout Network (RN), with a throttle signal back to the RUs for flow control. Sub-Farm Controllers (SFC) distribute events to the CPU farm, which runs the Level-2 (~10 ms) and Level-3 (~200 ms) triggers and writes ~50 MB/s to storage. Timing and Fast Control and the Controls & Monitoring LAN span the whole chain. (The per-event data volumes implied by these rates are sketched below.)
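The rates and bandwidths quoted in the diagram fix the average data volume per event at each stage. The following is a minimal back-of-the-envelope sketch, assuming only the nominal figures above (and taking the 6 GB/s lower end of the readout-network range at the nominal 40 kHz Level-1 rate); the names are illustrative and not taken from any LHCb software package.

```cpp
// Back-of-the-envelope check of the per-event data volumes implied by
// the rates on this slide. Pure arithmetic on the quoted figures.
#include <cstdio>

int main() {
    const double crossing_rate_hz = 40e6;   // 40 MHz bunch-crossing rate
    const double detector_bw_Bps  = 40e12;  // 40 TB/s off the detector
    const double l0_accept_hz     = 1e6;    // 1 MHz Level-0 accept rate
    const double l0_bw_Bps        = 1e12;   // 1 TB/s into the Level-1 system
    const double l1_accept_hz     = 40e3;   // 40 kHz nominal Level-1 accept rate
    const double readout_bw_Bps   = 6e9;    // 6 GB/s readout network (lower end)

    std::printf("per bunch crossing : %4.0f kB\n", detector_bw_Bps / crossing_rate_hz / 1e3);
    std::printf("per Level-0 accept : %4.0f kB\n", l0_bw_Bps / l0_accept_hz / 1e3);
    std::printf("per Level-1 accept : %4.0f kB\n", readout_bw_Bps / l1_accept_hz / 1e3);
    return 0;
}
```

Taken at face value, these ratios correspond to roughly 1 MB per bunch crossing and per Level-0 accept, and about 150 kB per Level-1 accept entering the readout network.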

  3. Functional Requirements
  • Transfer the physics data from the output of the Level-1 electronics to the CPU farm for analysis, and later to permanent storage
  • Dead-time-free operation within the design parameters
  • Reliable and ‘error-free’, or at least error-detecting
  • Provide timing information and distribute trigger decisions
  • Provide monitoring information to the controls and monitoring system
  • Support independent operation of sub-parts of the system (partitioning); see the sketch below
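One way to picture the partitioning requirement in the last bullet: a partition is a named, disjoint set of sub-detectors (together with their slice of the readout chain) that can be operated independently of the rest. The sketch below is purely illustrative; the types and names are hypothetical and not part of any LHCb control software.

```cpp
// Minimal illustration of partitioning: two partitions may run
// concurrently only if they share no sub-detector. Hypothetical types.
#include <cstdio>
#include <set>
#include <string>

struct Partition {
    std::string name;                    // e.g. a commissioning partition
    std::set<std::string> subdetectors;  // sub-detectors it owns exclusively
};

bool independent(const Partition& a, const Partition& b) {
    for (const auto& d : a.subdetectors)
        if (b.subdetectors.count(d)) return false;  // shared resource found
    return true;
}

int main() {
    Partition velo{"VELO-commissioning", {"VELO"}};
    Partition calo{"CALO-tests", {"ECAL", "HCAL"}};
    std::printf("can run concurrently: %s\n", independent(velo, calo) ? "yes" : "no");
    return 0;
}
```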

  4. Performance Requirements
  • Tables: LHCb in Numbers, LHCb DAQ in Numbers
  • The system will be designed against the nominal Level-1 trigger rate of 40 kHz, with a possible upgrade path to a Level-1 trigger rate of 100 kHz (a rough scaling sketch follows below)
  • Lead time of ~6-12 months → scalability required
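Since the readout is a pure push of complete events, the required readout-network bandwidth scales roughly linearly with the Level-1 accept rate. A rough sketch of the 40 kHz to 100 kHz upgrade path, assuming the ~150 kB aggregate event size implied by the figures on slide 2:

```cpp
// Rough scaling sketch for the Level-1 rate upgrade path (40 kHz -> 100 kHz).
// The ~150 kB event size is simply the ratio implied by 6 GB/s at 40 kHz.
#include <cstdio>

int main() {
    const double event_size_B = 150e3;         // ~150 kB per Level-1 accept
    for (double l1_rate_hz : {40e3, 100e3}) {  // nominal and upgraded rates
        const double bw_GBps = event_size_B * l1_rate_hz / 1e9;
        std::printf("L1 rate %5.0f kHz -> readout bandwidth ~%4.1f GB/s\n",
                    l1_rate_hz / 1e3, bw_GBps);
    }
    return 0;
}
```

At 100 kHz this reproduces the 15 GB/s upper end of the 6-15 GB/s range quoted on slide 2.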

  5. General Design Criteria
  • Uniformity
    • As much commonality as possible among sub-systems and sub-detectors
    • Reduced implementation effort
    • Reduced maintenance effort (a bug fixed once is fixed for all)
    • Reduced cost
  • Simplicity
    • Keep individual components as simple as possible in functionality
    • Minimize the probability of component failure (important given the large number of components)
    • Keep protocols as simple as possible to maximize reliability
  • Strict separation of controls and data paths throughout the system
  These choices may come at the cost of increased performance requirements in certain areas.

  6. Specific Choices (1)
  • Only point-to-point links, no shared buses across modules
    • For the physics data: obvious
    • For controls: desirable
    • Clear separation between the data path and the control path
  • Link and network technology
    • (Optical) Gb Ethernet as the uniform technology from the output of the Level-1 electronics to the input of the SFCs, because of its (expected) abundance and longevity (15+ years)
  • Readout protocol
    • Pure push-through protocol throughout the system, i.e. every data source sends data on as soon as it is available
    • Only raw Ethernet frames, no higher-level network protocol (IP); a frame sketch follows below
    • No vertical or horizontal communications besides the data themselves (→ throttle mechanism for flow control)
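To make the “raw Ethernet frames, no IP” choice concrete, here is a minimal sketch of an event fragment packed behind a plain 14-byte Ethernet header (destination MAC, source MAC, EtherType). It only builds the frame in memory; real transmission would need a raw socket (e.g. Linux AF_PACKET), and the EtherType 0x88B5 is the IEEE local-experimental value used as a placeholder, not an assigned LHCb protocol identifier.

```cpp
// Sketch of a raw Ethernet frame carrying an event fragment, reflecting
// the "no higher-level network protocol" choice. In-memory only.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

std::vector<std::uint8_t> build_frame(const std::array<std::uint8_t, 6>& dst,
                                      const std::array<std::uint8_t, 6>& src,
                                      const std::vector<std::uint8_t>& fragment) {
    std::vector<std::uint8_t> frame;
    frame.insert(frame.end(), dst.begin(), dst.end());            // destination MAC
    frame.insert(frame.end(), src.begin(), src.end());            // source MAC
    frame.push_back(0x88);                                        // EtherType, high byte
    frame.push_back(0xB5);                                        // EtherType, low byte (placeholder)
    frame.insert(frame.end(), fragment.begin(), fragment.end());  // event fragment payload
    return frame;                                                 // pushed as-is, no IP/TCP
}

int main() {
    std::array<std::uint8_t, 6> dst{0x02, 0, 0, 0, 0, 0x01};  // next readout stage
    std::array<std::uint8_t, 6> src{0x02, 0, 0, 0, 0, 0x02};  // this readout unit
    std::vector<std::uint8_t> fragment(1000, 0xAB);           // dummy fragment data
    const auto frame = build_frame(dst, src, fragment);
    std::printf("frame size: %zu bytes (14-byte header + payload)\n", frame.size());
    return 0;
}
```

Keeping the protocol at the raw-frame level is what keeps the pure push model simple: a source never waits for acknowledgements, and back-pressure is handled only by the separate throttle signal.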

  7. Specific Choices (2)
  • Integrated Experiment Control System (ECS)
    • Same tools and mechanisms for detector and dataflow controls
    • Preserving operational independence
  • Crates and boards
    • The DAQ components will be housed in standard LHCb crates (stripped-down VME crates)
    • The components will be implemented on standard LHCb boards (9U × 400 mm, VME-like, without VME slave interface)

  8. Constraints and Environment
  • The DAQ system will be located at Point 8 of the LHC
  • Some equipment will be located underground…
    • All the Level-1 electronics
    • FEM/RU?
    • (Parts of) the readout network?
  • …and some on the surface
    • (Parts of) the readout network
    • SFCs
    • CPU farm
    • Computing infrastructure (CPUs, disks, etc.)
    • Control-room consoles, etc.
  • No DAQ equipment will be located in radiation areas
  • Optical Gb Ethernet allows the equipment to be distributed freely between underground and surface
  • Issues: cooling/ventilation, floor space

  9. Summary
  • Design criteria: simplicity, commonality, uniformity
    • Potentially with higher cost in certain areas
    • Many advantages in the operation of the system
  • Designed around Gb Ethernet as the basic link technology throughout the system (except for the individual farm nodes)
  • Pure push protocol without a higher-level network protocol
  • No shared buses for either data or controls
  • Controls and data paths are separated throughout the system
