

  1. Status of implementation of Detector Algorithms in the HLT framework
Calibration Session – OFFLINE week (16-03-2007)
M. Richter, D. Röhrich, S. Bablok (IFT, University of Bergen), P.T. Hille (University of Oslo), M. Ploskon (IKF, University of Frankfurt), S. Popescu, V. Lindenstruth (KIP, University of Heidelberg), Indranil Das (Saha Institute of Nuclear Physics)

  2. TOC
• HLT functionality
• HLT interfaces
  • HLT ↔ DCS
  • HLT ↔ OFFLINE
  • HLT interface to AliEve
  • Synchronisation via ECS
• Status of Detector Algorithms
  • general remarks
  • TPC
  • TRD
  • PHOS
  • DiMuon

  3. HLT functionality
• Trigger
  • accept/reject events
  • verify dielectron candidates
  • sharpen the dimuon transverse momentum cut
  • identify jets
  • ...
• Select
  • select regions of interest within an event
  • remove pile-up in p-p
  • filter out low momentum tracks
  • ...
• Compress
  • reduce the amount of data required to encode the event as far as possible without losing physics information

  4. HLT interfaces
• ECS:
  • controls the HLT via well defined states (SMI)
  • provides general experiment settings (type of collision, run number, …)
• DCS:
  • provides the HLT with current detector parameters (voltages, temperature, …) → Pendolino
  • provides DCS with data processed by the HLT (TPC drift velocity, …) → FED portal (Front-End-Device portal)
• OFFLINE:
  • interface to fetch data from the OCDB → Taxi (OFFLINE → HLT)
  • provides OFFLINE with calculated calibration data → Shuttle portal (HLT → OFFLINE)
• HOMER:
  • HLT interface to AliEve for online event monitoring

  5. Data flow in HLT
[Diagram: data flow between the DCS (PVSS, Archive DB, FED portal, Pendolino portal with DIM subscriber), ECS (ECS-proxy, TaskManager), the HLT cluster (PubSub chain with DAs, DA_HCDB local cache, HCDB, Taxi-HCDB, HOMER to AliEve, FES and MySQL feeding the Shuttle) and OFFLINE (Taxi ↔ OCDB/AliEn, Shuttle portal, AliRoot CDB access). Legend distinguishes internal/external parts, detector responsibility, framework components and interfaces.]

  6. Synchronisation via ECS
[Diagram: ECS state machine of the HLT. The commands INITIALIZE, CONFIGURE (+ params), ENGAGE, START, STOP, DISENGAGE, RESET and SHUTDOWN drive the chain OFF → INITIALIZED → CONFIGURED → READY → RUNNING and back, via the intermediate states INITIALIZING, CONFIGURING, ENGAGING, COMPLETING, DISENGAGING and DEINITIALIZING (implicit transitions). Annotations: during INITIALIZING the current version of the HCDB is distributed to the DA nodes (DA_HCDB); around CONFIGURING the Pendolino fetches data from the DCS Archive DB and stores it in DA_HCDB, and the DAs request their DA_HCDB; after COMPLETING the FileExchange Server (FES) and the MySQL DB are filled, and the Offline Shuttle can fetch the data.]
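As a rough illustration of the transition logic in the diagram, here is a minimal C++ sketch of the command/state table. State and command names are taken from the slide; the class itself and the omission of the intermediate states are simplifications, not the actual TaskManager code.

```cpp
// Minimal sketch of the ECS-driven state machine described above.
// Intermediate states (INITIALIZING, CONFIGURING, ...) are collapsed
// into the implicit transitions; the class name is hypothetical.
#include <map>
#include <string>
#include <utility>

enum class HltState { Off, Initialized, Configured, Ready, Running };

class EcsProxySketch {
public:
    // Returns true if the command is valid in the current state.
    bool Handle(const std::string& cmd) {
        static const std::map<std::pair<HltState, std::string>, HltState> kTable = {
            {{HltState::Off,         "INITIALIZE"}, HltState::Initialized},
            {{HltState::Initialized, "CONFIGURE"},  HltState::Configured},  // + params
            {{HltState::Configured,  "ENGAGE"},     HltState::Ready},
            {{HltState::Ready,       "START"},      HltState::Running},
            {{HltState::Running,     "STOP"},       HltState::Ready},       // via COMPLETING
            {{HltState::Ready,       "DISENGAGE"},  HltState::Configured},
            {{HltState::Configured,  "RESET"},      HltState::Initialized},
            {{HltState::Initialized, "SHUTDOWN"},   HltState::Off},
        };
        auto it = kTable.find({fState, cmd});
        if (it == kTable.end()) return false;   // command rejected in this state
        fState = it->second;
        return true;
    }
private:
    HltState fState = HltState::Off;
};
```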

  7. HLT ↔ DCS interface
• FED portal:
  • DIM channels (services published on the HLT side); a minimal DIM sketch follows after this list
  • implements the FEDApi partially
  • Subscriber component of the HLT framework
  • PVSS panels on the DCS side integrate the data into the DCS system
  • storing of DCS related data in the DCS Archive DB (HLT-cluster monitoring [HLT]; TPC drift velocity, … [detector specific])
• Pendolino:
  • contacts the DCS Amanda server (DCS Archive DB)
  • fetches current running conditions (temperature, voltages, …)
  • feeds the content into DA_HCDB
  • requests in regular time intervals:
    • three Pendolinos, each with a different frequency (fast, medium, slow)
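To make the FED-portal side concrete, here is a hedged sketch of publishing one value through DIM, using the standard DIM C++ server API (dis.hxx). The service name, server name and update loop are illustrative, not the actual HLT FED-portal code.

```cpp
// Sketch: exporting an HLT-produced value to DCS as a DIM service,
// as the FED portal does. Only the publish/update mechanics are shown.
#include <dis.hxx>     // DIM server classes (DimService, DimServer)
#include <unistd.h>

int main() {
    float driftVelocity = 0.0f;                  // value to export to DCS
    DimService svc("HLT/TPC/DRIFT_VELOCITY",     // hypothetical service name
                   driftVelocity);
    DimServer::start("HLT_FED_PORTAL");          // register with the DIM DNS

    while (true) {
        driftVelocity = 2.65f;                   // placeholder: result of the drift-velocity DA
        svc.updateService();                     // push the bound value to subscribers
        sleep(60);                               // update once per minute
    }
    return 0;
}
```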

  8. HLT ↔ DCS interface
[Diagram: the portal-dcs node (dcs-vobox) couples the HLT to the DCS. The FED API (PubSub ↔ DIM) connects PVSS to the DIM subscriber of the HLT chain; the Pendolino (incl. detector preprocessing) pulls data from the DCS Archive DB into DA_HCDB, which the DAs read through the AliRoot CDB access classes; SysMes triggers the synchronisation and is reachable through its own interface. Legend distinguishes detector responsibility, framework components and interfaces.]

  9. HLT ↔ DCS interface
• Pendolino details:
  • three different frequencies (see the scheduling sketch after this list):
    • fast Pendolino: 10 sec – 1 min
    • normal Pendolino: 1 min – 5 min
    • slow Pendolino: over 5 min
  • response time:
    • ~13000 values per second
  • remark:
    • the requested values can be up to 2 min old (this is the time that can pass until the data is shifted from the DCS PVSS to the DCS Archive DB)
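A minimal sketch of how the three polling frequencies above could be scheduled; the fetch function is a placeholder for the actual Amanda request, and only the scheduling idea is shown.

```cpp
// Sketch: three Pendolino polling loops at the intervals quoted above.
#include <chrono>
#include <thread>
#include <vector>

void FetchFromAmanda(const char* group) {
    // placeholder: request the DCS values of this group from the
    // Amanda server and write them into DA_HCDB
}

int main() {
    using std::chrono::seconds;
    struct Loop { const char* group; seconds period; };
    const std::vector<Loop> loops = {
        {"fast",   seconds(30)},    // fast Pendolino: 10 s - 1 min
        {"normal", seconds(180)},   // normal Pendolino: 1 - 5 min
        {"slow",   seconds(600)},   // slow Pendolino: over 5 min
    };
    std::vector<std::thread> threads;
    for (const auto& l : loops)
        threads.emplace_back([l] {
            for (;;) {              // runs for the lifetime of the portal
                FetchFromAmanda(l.group);
                std::this_thread::sleep_for(l.period);
            }
        });
    for (auto& t : threads) t.join();
    return 0;
}
```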

  10. HLT ↔ OFFLINE interface
• Taxi portal
  • requests the OCDB and caches the content locally (HCDB)
  • provides calibration objects to the Detector Algorithms (DAs) inside the HLT
    • copied locally to the DA nodes before each run (DA_HCDB)
    • DA_HCDB accessible via the AliRoot CDB access classes (see the access sketch below)
• Shuttle portal
  • collects calibration data from the Detector Algorithms
  • provides the data to OFFLINE; fetched after each run by the Offline Shuttle
    • data exchanged via the FileExchangeServer (FES)
    • meta data stored in a MySQL DB
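Since DA_HCDB is read through the regular AliRoot CDB access classes, a DA can use the familiar AliCDBManager calls against the local cache. A minimal sketch, assuming a hypothetical cache path and an example OCDB entry name:

```cpp
// Sketch: a DA reading a calibration object from its local DA_HCDB copy
// through the AliRoot CDB access classes.
#include "AliCDBManager.h"
#include "AliCDBEntry.h"
#include "TObject.h"

TObject* LoadCalibration(int runNumber) {
    AliCDBManager* man = AliCDBManager::Instance();
    man->SetDefaultStorage("local:///opt/HLT/DA_HCDB");  // hypothetical cache path
    man->SetRun(runNumber);

    // Same call a DA would use against the full OCDB; here it resolves
    // against the locally cached copy on the DA node.
    AliCDBEntry* entry = man->Get("TPC/Calib/Pedestals"); // example entry
    return entry ? entry->GetObject() : 0;
}
```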

  11. HLT ↔ OFFLINE interface
• Taxi portal:
[Diagram: the Taxi on the portal-taxi node (vobox-taxi) periodically pulls calibration objects from the OFFLINE OCDB into the HCDB cache; SysMes triggers the update and the ECS-proxy passes the current run number to the TaskManager; before each run the HCDB is copied to the DA nodes as DA_HCDB, where the DAs read it via the AliRoot CDB access classes.]

  12. HLT ↔ OFFLINE interface
• Shuttle portal:
[Diagram: the DAs ship their calibration output through the portal-shuttle node (Subscriber) to the FileExchangeServer (FES) and register the meta data in the MySQL DB; when ECS notifies "collecting finished" via the ECS-proxy and TaskManager, the Offline Shuttle fetches the data and stores it in the OCDB.]

  13. AliEVE HLT event display (example TPC)
• existing infrastructure (M. Tadel) adapted to the HLT with minimal effort
• connect to the HLT from anywhere within the GPN
• ONE monitoring infrastructure for all detectors
• using the HOMER data transport abstraction

  14. Status Detector Algorithms (general remarks)
• the HLT provides the service and infrastructure to run Detector Algorithms, e.g. reconstruction and calibration algorithms
• offline code can run on the HLT if it fulfills the requirements given by the following constraints:
  • limited accessibility of (global) AliRoot data structures
  • processing of each event is distributed over many nodes
  • none of the nodes has the full event data of all stages available
• a detector algorithm interfaces to the HLT data chain via a processing component
  • the processing component implements the HLT component interface
• general principle: only the input, DCS and calibration data is available for processing
[Diagram: processing component inside the HLT, with data input and data output]

  15. Status Detector Algorithms (general remarks)
• completely identical HLT processing can run both in the online and in the offline framework (see the reduced component skeleton below):
  • online: RORC publishers feed the HLT chain (HLT Processors / Detector Algorithms) with data from DAQ; HLTout ships the result back to DAQ
  • offline: the same chain runs inside AliHLTReconstructor, between the offline source interface and the offline sink interface
• dedicated data structures are shipped between components; they can be ROOT TObjects
• DAs must work entirely on incoming data
• dedicated publisher components for special data are possible
• the HLT produces ESD files, filled with the data it can reconstruct/provide
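For orientation, a reduced skeleton of such a processing component. AliHLTProcessor is the real base class of the high-level component interface; the concrete class name is hypothetical and several mandatory overrides (component ID, data types, Spawn) are omitted for brevity, so this is a sketch rather than a complete component.

```cpp
// Reduced skeleton of an HLT processing component wrapping a detector
// algorithm.
#include "AliHLTProcessor.h"

class AliHLTSampleDAComponent : public AliHLTProcessor {   // hypothetical name
public:
    // High-level event handler: the framework delivers the input blocks;
    // the DA works only on this incoming data (plus HCDB/DCS conditions)
    // and publishes its output back into the chain.
    int DoEvent(const AliHLTComponentEventData& /*evtData*/,
                AliHLTComponentTriggerData& /*trigData*/) {
        for (const TObject* obj = GetFirstInputObject();
             obj; obj = GetNextInputObject()) {
            // ... run the detector algorithm on this input object ...
        }
        // A PushBack(...) call would publish the result to the next
        // component in the chain; omitted here.
        return 0;
    }
};
```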

  16. Status Detector Algorithms (general remarks)
• no access to (global) AliRoot data structures
  • DAs have no AliRunLoader instance
  • DAs run as separate processes, no data exchange via global variables
  • DAs can only work on incoming data and OCDB data
  • a proper data transport hierarchy must be deployed by the DA, i.e. access to any data through global methods/objects from lower hierarchies is penalty code
• structures/objects for data exchange have to be optimized
  • TObjects for data transport must declare pointer-type members as transient members (//!), with the initialization properly handled by the constructor (see the sketch below)
• in principle all offline code using the AliReconstructor plugin scheme can run on the HLT, if a proper data transport hierarchy is deployed
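A minimal sketch of the transient-member rule above, using the standard ROOT "//!" convention; class and member names are purely illustrative.

```cpp
// Sketch: a ROOT-streamable class for HLT data transport marks
// pointer-type members transient with "//!" and initializes them in
// the constructor, so the streamed object stays well defined.
#include "TObject.h"
#include "TH1F.h"

class AliHLTSampleCalibData : public TObject {   // hypothetical class
public:
    AliHLTSampleCalibData() : TObject(), fMeanSignal(0), fWorkingHisto(0) {}

    Float_t fMeanSignal;     // streamed: plain data member
    TH1F*   fWorkingHisto;   //! transient: pointer is NOT streamed;
                             //  the constructor initializes it

    ClassDef(AliHLTSampleCalibData, 1)
};
```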

  17. Status Detector Algorithms (TPC status)
Status:
• full TPC reconstruction running in the HLT
  • output in ESD format
• TPC calibration tasks defined by the TPC group
  • the TPC group decided to extensively use the HLT's computing capabilities for calibration tasks
• several prototype DAs developed
• commissioning of the calibration algorithms starts soon

  18. Status Detector Algorithms (TPC task list)
• HLT online monitoring for TPC
• Calibration:
  • 1-d histograms for pedestal runs and noise calibration (see the sketch after this list)
  • 1-d histograms for pad-by-pad calibration (time offset, gain and width of the time response function) for the pulser run and during normal data taking
  • 1-d histograms for the gain calibration during the krypton run, cosmics, laser and data taking
  • TPC drift velocity
• Data Quality Monitoring (DQM)
• Online monitoring:
  • 3D reconstructed track view, optionally together with the 3D detector geometry
  • drift velocity monitoring
  • pad-by-pad signal
  • charge per reconstructed track monitoring
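As a toy version of one of the 1-d calibration histograms above, a ROOT sketch accumulating the pedestal of a single TPC pad; names, binning and the raw-data layout are assumptions for illustration.

```cpp
// Sketch: per-pad pedestal histogram. Pedestal = mean, noise = RMS of
// the accumulated ADC distribution.
#include "TH1F.h"

void FillPedestal(TH1F*& h, const int* adc, int nTimeBins) {
    if (!h) h = new TH1F("hPedPad", "pad pedestal;ADC counts;entries",
                         1024, 0., 1024.);
    for (int t = 0; t < nTimeBins; ++t)
        h->Fill(adc[t]);    // one entry per time bin of this pad
}
```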

  19. Status Detector Algorithms (TRD status)
• Clusterization algorithm
  • ready and working
  • uses the offline clusterizer directly
• Standalone tracker
  • almost ready (within the next 1-2 weeks)
  • HLT component implemented
  • still a few fixes to be done within the AliRoot TRD offline code – the HLT will run 100% offline code here too
• PID component
  • pending – the offline code is in its finalization stage – again, no change of the offline algorithms within the HLT
• Triggering scenarios under consideration

  20. Status Detector Algorithms (TRD status)
• Calibration
  • native AliRoot OCDB calibration data access (provided via the HLT Taxi)
• Production of reference data for calibration algorithms
  • ready and working
  • uses offline code directly
• Monitoring
  • prototype ready
  • integration into AliEve will follow
[Figure: TRD clusters reconstructed on the HLT]

  21. Status Detector Algorithms (TRD status)
• Calibration:
  • histogram production ready & working (MCM tracklet based)
  • each HLT component has OCDB access (just like in Offline) via local (HLT node) storage – the TRD chain is using OCDB data (1:1 offline AliRoot code)
  • the TRD preprocessor handles the calculation of the calibration parameters from the input histograms collected on the HLT
• TRD local reconstruction on the HLT almost complete (local tracking still on the way...)
• calibration histograms are produced
• first HLT monitoring code emerging soon (also AliEve support)

  22. Status Detector Algorithms (TRD task list)
• Short term to do:
  • PID
  • track merging with TPC (and eventually ITS)
• Long term to do:
  • physics trigger scenarios

  23. Status Detector Algorithms (PHOS status)
• Current status
  • running the full PHOS HLT chain (5 modules) with raw data simulated in AliRoot
  • successful test on a simulated HLT "cluster" consisting of 3 laptops
  • fast and accurate online evaluation of cell energies
  • calibration data: continuous accumulation of the per-channel energy distribution
    • calibration data written to ROOT files at end of run (see the sketch after this list)
    • histograms have been evaluated visually and look reasonable
  • raw data can be written to files untouched by the HLT (HLT mode A)
  • calibration data can be accumulated over several runs
  • event display: display of events & calibration data for 5 modules using HOMER
    • collection of data from several nodes to be visualized in a single event display
  • the PHOS HLT analysis chain has run successfully distributed over 21 nodes of the HLT cluster at P2
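A minimal ROOT sketch of the end-of-run step mentioned above: writing the accumulated per-channel energy histograms to a file. The file name and the histogram container are illustrative, not the PHOS DA's actual output format.

```cpp
// Sketch: persisting the accumulated calibration histograms at end of run.
#include "TFile.h"
#include "TObjArray.h"
#include "TString.h"   // for Form()

void WriteCalibAtEndOfRun(const TObjArray& energyHistos, int runNumber) {
    TFile f(Form("PHOS_calib_run%d.root", runNumber), "RECREATE");
    // kSingleKey stores the whole array under one key instead of
    // writing each histogram separately.
    energyHistos.Write("energyPerChannel", TObject::kSingleKey);
    f.Close();
}
```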

  24. Status Detector Algorithms (PHOS task list)
• Currently ongoing work:
  • implementation of DQM
  • integration of end-of-run calibration procedures, DA
  • implementation of a fast π0 invariant mass algorithm
  • testing and benchmarking of the processing chain on the HLT cluster
  • preparations for PDC07
• Near future plans:
  • integration of ECS, DCS, Shuttle etc.
  • testing of the HLT processing chain on beam-test data
  • making of ESDs to be sent to DAQ via HLT-out
  • running of the PHOS HLT processing chain on data files and ROOT files
  • minor improvements of the online display
  • finalization and documentation of the internal PHOS HLT data format


  26. Status Detector Algorithms (DiMuon status and task list)
• Present status:
  • standalone hit reconstruction is ready and implemented in the HLT environment of the CERN PC farm
  • first results of the resolution test of the dHLT chain at CERN, with raw data generated using AliRoot
  • the processing time for multiple events is large compared to the standalone mode
  • the full dHLT chain is up and working on the UCT cluster
• Future to-do list:
  • improvement of the processing time
  • integration of the tracker algorithm in the CERN HLT
  • implementation of the full chain, along with debugging and benchmarking
  • preparing the output in ESD format
  • efficiency check of the dHLT chain using beam-test data

  27. Information on the web
• http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/ECS-interface
• http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Specification-HLT2OFFLINE-interface
• http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/UseCase-Calibration-HLT
• http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Data_path_from_DCS_to_the_HLT
• and the talks of the HLT session at the last ALICE week

  28. Backup slides

  29. Status Detector Algorithms (TRD DA overview)

  30. Status Detector Algorithms (TRD status)

  31. Resolution of dHLT hit reconstruction
Note: the resolution in the Y direction is far better than in the X direction due to the detector geometry: the minimum pad size in the bending plane is ~0.5 cm, whereas in the non-bending direction it is ~0.71 cm.

  32. HLT ECS interface • State transition commands from ECS • INITIALIZE, CONFIGURE(+PARAMS), ENAGE,START,… • Mapping to TaskManager states • CONFIGURE parameters: • HLT_MODE: the mode, in which the HLT shall run (A, B or C) • BEAM_TYPE: (pp (proton-proton) or AA (heavy ion)) • RUN_NUMBER: the run number for the current run • DATA_FORMAT_VERSION: the expected output data format version • HLT_TRIGGER_CODE: ID defining the current HLT Trigger classes • CTP_TRIGGER_CLASS: the trigger classes in the Central Trigger Processor • HLT_IN_DDL_LIST: list of DDL cables on which the HLT can expect event data in the coming run. The structure will look like the following: <CableName>:<DetectorPart>,<CableName>:<DetectorPart>,... • HLT_OUT_DDL_LIST: list of DDLs, on which the HLT can send data to DAQ
