
DAQ thoughts about upgrade


Presentation Transcript


  1. DAQ thoughts about upgrade 11/07/2012 Barthelemy.Von.Haller@cern.ch Sylvain.Chapeland@cern.ch

  2. What we would like to see in a future framework (1) • A unique database for online and offline configurations and calibration, including data valid for the current run (for DA and DQM, per instance) • Data traceability: calibration -> OCDB -> application (keep track of the calibration and configuration used for run N) • A unified data format that can be used throughout the systems (from the LDC onwards); if two formats are needed (e.g. raw and ROOT), then transparent conversion from one to the other • Review our policy on dropping incomplete events • A single library/interface to read/write data (local, CASTOR, tags, indexes); a possible shape of such an interface is sketched below
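
  The last bullet could look like the following C++ sketch of a single read/write interface that hides where the data actually lives. This is purely illustrative: the names (DataStore, openDataStore, addTag) are invented for this note and do not refer to an existing library.

    #include <cstdint>
    #include <memory>
    #include <string>
    #include <vector>

    // One abstract interface, several possible backends (local disk, CASTOR, ...).
    class DataStore {
    public:
      virtual ~DataStore() = default;
      // fetch / store one event payload by identifier
      virtual std::vector<char> readEvent(std::uint64_t eventId) = 0;
      virtual void writeEvent(std::uint64_t eventId, const std::vector<char>& payload) = 0;
      // attach a tag so the event can be retrieved later through an index
      virtual void addTag(std::uint64_t eventId, const std::string& label) = 0;
    };

    // A factory picks the backend from the location string, e.g. "file:///data/run1234"
    // or "castor://...", so callers only ever see the DataStore interface.
    std::unique_ptr<DataStore> openDataStore(const std::string& location);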

  3. What we would like to see in a future framework (2) • Independent modules of reasonable size • Make use of the newly available online CPU (for reconstruction or analysis) • Dynamic adaptation of algorithms to the available online resources (e.g. online reconstruction on event-building CPUs for a fraction of events; see the sketch after this slide) • DQM-QA merge: a single system usable online and offline for all QA-related tasks • DQM: online reconstruction of events and storage of the results for later; QA completed offline, but the first X % done online • Multithreading
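
  As an illustration of the "dynamic adaptation" bullet, the sketch below chooses per event whether to run full online reconstruction, based on the CPU headroom of the node. It only shows the idea; the class name, the 20% headroom target and the random sampling are assumptions, not an existing interface.

    #include <algorithm>
    #include <random>

    class RecoScheduler {
    public:
      // load = current CPU usage of the node, in [0, 1]
      void updateLoad(double load) {
        // keep roughly 20% headroom; scale the reconstructed fraction with what is left
        fraction_ = std::clamp((0.8 - load) / 0.8, 0.0, 1.0);
      }

      // decide per event whether it goes through full online reconstruction
      bool recoThisEvent() { return dist_(rng_) < fraction_; }

    private:
      double fraction_ = 0.1;                          // start conservatively
      std::mt19937 rng_{std::random_device{}()};
      std::uniform_real_distribution<double> dist_{0.0, 1.0};
    };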

  4. Common Procedures (1) • Not necessarily common hardware or facilities! • E.g. the same build-system know-how but different instances • Release numbering • Build system & packaging • Tools • e.g. cmake, svn, rpm, yum, ticket system • Dependency handling (which AliRoot version to use in production/test) • Benchmarks (reference systems and tools) • Mock modules

  5. Common Procedures (2) • Code access policy (open source? commit rights?) • Languages (C++, standard libraries STL/Boost, ROOT) • OS (production farm and desktops; SLC, Ubuntu, Mac OS, Windows, mobile) • HW (x86_64, GPUs, virtualisation) • Storage (cloud?) • Core language features (e.g. log / printf / cout) • Coding guidelines and standards

  6. Current DAQ tools or features that could be used/replaced by a common framework • Data validity checks, e.g. what we have in the readout now • InfoLogger • Logbook (a common source of information for online and offline) • FXS (moves data around at runtime between systems) • Publish-subscribe and notification mechanism (currently DIM); a minimal sketch follows • Process control • AMORE (online DQM allowing parallel execution of detector code, collection of the results, and visualization through custom, generic and web user interfaces)
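
  For the publish-subscribe bullet, here is a minimal in-process sketch of the kind of notification mechanism meant. It is not the DIM API; all names are invented, and a real system would of course work across the network.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    class Broker {
    public:
      using Callback = std::function<void(const std::string& payload)>;

      // register interest in a named topic (e.g. "run/start", "ldc42/status")
      void subscribe(const std::string& topic, Callback cb) {
        subscribers_[topic].push_back(std::move(cb));
      }

      // notify every subscriber of that topic
      void publish(const std::string& topic, const std::string& payload) {
        for (auto& cb : subscribers_[topic]) cb(payload);
      }

    private:
      std::map<std::string, std::vector<Callback>> subscribers_;
    };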

  7. New DAQ • Replay and inject events at any level (testing, benchmarking, mocking) • Started a prototype of a new DAQ system based on modular I/O boxes • A few types of modules: event producer, consumer, filter • C++ base classes provide the common needs: FIFOs, thread pools, process control • Inheritance + virtual methods to implement the custom behavior of each module (see the sketch after this slide) • Flexible description of module interconnects to instantiate different DAQ systems
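
  A rough sketch of how such base classes and virtual methods could look, written for this note. It is not the actual prototype code; the class names, the use of std::queue and the producer/consumer conventions in the comments are all assumptions.

    #include <optional>
    #include <queue>
    #include <vector>

    using Event = std::vector<char>;
    using Fifo  = std::queue<Event>;   // a real system would use a bounded, thread-safe FIFO

    class Module {
    public:
      Module(Fifo* in, Fifo* out) : in_(in), out_(out) {}
      virtual ~Module() = default;

      // the framework calls step() from its thread pool
      void step() {
        Event ev;
        if (in_) {                       // producers have no input FIFO
          if (in_->empty()) return;
          ev = std::move(in_->front());
          in_->pop();
        }
        if (auto out = process(std::move(ev)); out && out_) {   // consumers have no output FIFO
          out_->push(std::move(*out));
        }
      }

    protected:
      // custom behavior of each module (producer, filter, consumer, ...)
      virtual std::optional<Event> process(Event ev) = 0;

    private:
      Fifo* in_;
      Fifo* out_;
    };

    // Example filter: forward only non-empty events (the real criterion is a placeholder)
    class NonEmptyFilter : public Module {
    public:
      using Module::Module;
      std::optional<Event> process(Event ev) override {
        if (ev.empty()) return std::nullopt;
        return ev;
      }
    };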

  8. Answers to Matthias’ questions • Maximum data recording rate: 4 GB/s (4 x 10 Gbit/s fibers to CASTOR) • Limiting factor: bandwidth to CASTOR (4 GB/s vs >8 GB/s locally) • Easy scale-up of the system (more LDCs and GDCs) and hardware evolution • Other « DAQ » group services: ACT, ECS, FXS, Logbook, DQM, DA, ACR stations, InfoLogger, DIP
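
  As a cross-check of the numbers above (assuming the quoted figures are rates per second): 4 fibers x 10 Gbit/s = 40 Gbit/s, i.e. roughly 5 GB/s of raw link capacity, which is consistent with a sustained recording rate of about 4 GB/s to CASTOR once overheads are taken into account.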

  9. Conclusion • We are looking for topics of reflection and debate, discussions, and the definition of an architecture and of common processes, but not for direct technical solutions or implementations • The Panel should also define the R&D processes for what will follow the Panel’s work, and provide requirements
