
DQM and monitoring workshop



  1. DQM and monitoring workshop: Mandate and Goals

  2. Mandate
  • Assess what exists and what the plans of each project are for the production of the monitoring elements used to summarize the status and performance of each part of the detector, trigger and data flow for any analysis quantum (lumi segment).
  • We want to know WHERE (of all the possible data access points) and HOW information is gathered, analyzed and stored. At this time we are not (yet) interested in WHAT is produced.
  • We need to cover both event-data quality monitoring and non-event data (specifically information such as scalers and registers in the FE, FEC, FED and TPG, besides the obvious DCS data).

  3. Possible producers of Monitoring Elements
  • Using the language of the CMS DQM services, pieces of monitoring information (called Monitoring Elements, which in the end are histograms) can be produced at many stages in the CMS data flow.
  [Diagram: DQM producers attached to the VME crate controller (non-event data), the front-end VME spy buffers (local data), the global data streams after L1 selection, the FU/HLT, the global data stream after HLT selection, and the Storage Manager.]
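The histogram-based Monitoring Element model described above can be sketched as a small registry that any producer stage books its histograms into. This is a minimal illustration only; all class and method names here are hypothetical, not the actual CMS DQM-services API.

```python
# Minimal sketch of a Monitoring Element registry, assuming the
# histogram-based model described in the slide. Names are invented
# for illustration and do not reflect the real CMS DQM interfaces.
from collections import Counter


class MonitoringElement:
    """A named histogram filled by one producer stage."""

    def __init__(self, name, stage):
        self.name = name        # e.g. "ECAL/occupancy"
        self.stage = stage      # e.g. "FU/HLT", "StorageManager"
        self.bins = Counter()   # bin label -> entry count

    def fill(self, bin_label, weight=1):
        self.bins[bin_label] += weight


class DQMStore:
    """Collects Monitoring Elements booked by every producer stage."""

    def __init__(self):
        self.elements = {}

    def book(self, name, stage):
        me = MonitoringElement(name, stage)
        self.elements[(stage, name)] = me
        return me


store = DQMStore()
occ = store.book("ECAL/occupancy", "FU/HLT")
for channel in [3, 7, 3, 12]:
    occ.fill(channel)
print(occ.bins[3])  # → 2
```

The point of the sketch is that the same booking interface works whether the producer sits at the VME crate controller, in the HLT farm or at the Storage Manager; only the `stage` label differs.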

  4. Goals
  • Achieve a unified way to present data to shifters. Ideally one would like a hierarchy of information made available to the shifter to allow a first judgment: for example, identify which subdetector has a problem and understand whether the problem is a data-flow, trigger or DCS one. All of this should start from a unique graphic view of CMS, with tools allowing easy navigation to the layers with higher granularity.
  • The (quasi) real-time analysis of the Monitoring Element information should also lead to a first definition of a run-quality file. A proposal is that each project aims to summarize its understanding of the detector performance in a couple of percentages:
  • the percentage of the detector (seen from the data point of view) which can be used for analysis;
  • if applicable, the percentage of the detector trigger system which was working correctly.
  • Reducing the wealth of information to just two numbers might be a bit rough, but it focuses the ideas on what we want to achieve and it should be sufficient for the early analyses.

  5. Goals (more specifically)
  • By the end of today, to agree (or at least to set a clear list of action items) on:
  • where to collect the various pieces of monitoring information and prepare the data to be used in automatic analysis; hopefully this is the same place from which access (both for writing and reading) to the online database as well as ORCON happens;
  • an interactive graphic view summarizing the status of all of CMS and also of the individual detectors: this is meant to be a self-evident gateway, for the shifter, into the data-monitoring world;
  • the TOOL (note the singular!) to be used for presenting this information to the end user, meaning the SHIFTER: the experts will always have their own tools, even though it would be nice if the expert levels were integrated into the overall DQM structure rather than being based only on ad-hoc tasks.
  • In our opinion we must also agree on ways to export (passively) the data-quality information in real time to the outside (meaning out of the private pit network).
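One way to read "export passively" in the last bullet is a read-only snapshot served over HTTP from inside the pit network. The sketch below shows that idea only; the payload fields, port and endpoint are assumptions, not anything agreed in the workshop.

```python
# A minimal sketch of passively exporting the data-quality summary:
# serve a read-only JSON snapshot over HTTP. Payload fields and the
# port number are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical snapshot, e.g. refreshed once per lumi segment.
SUMMARY = {"run": 12345, "detector_pct": 98.5, "trigger_pct": 99.1}


class QualityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(SUMMARY).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # read-only: no POST/PUT handlers


if __name__ == "__main__":
    HTTPServer(("", 8080), QualityHandler).serve_forever()
```

Because the handler only answers GET requests, nothing outside the pit network can write back into the monitoring system, which is what "passively" demands.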

  6. What next?
  • Implement the action items identified today: we count on the collaboration of all DPG teams to readily adapt their current developments to these decisions. We see the upcoming global runs as the times when we will actually verify how the implementation is advancing.
  • In due time, scrutinize the information that each project provides to judge the quality of the data the detector delivers, and make sure that it fulfils the needs of efficient problem detection by the CMS operation team.
  • One specific item not included in this workshop but coming into focus is the luminosity measurement: it will be part of dedicated sessions in future meetings.
