
AliRoot survey: QA



Presentation Transcript


  1. AliRoot survey: QA. P. Hristov, 11/06/2013

  2. Are you involved in analysis activities? (36.1% Yes, 63.9% No)

  3. Types of QA activities

  4. Sources of QA information. Other: analysis tasks, LEGO train filtered trees

  5. Is the QA information sufficient for your purposes?

  6. Additional sources of QA information
  • OCDB, Logbook and the MonALISA page
  • For some runs we produce our own tree with the full T0 ESD output and some information from other detectors and the physics selection, for local analysis
  • Sometimes the logbook
  • Nevertheless, a tool that checks the full run at a reconstruction level lower than the ESDs would be great. In the past I used my own code to check the raw data QA on the full run.
  • Often we need access to other detectors' QA information to understand spotted problems and/or calibration output information
  • The QA output (run during reconstruction) must be merged by run to be useful (see the merging sketch below)
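As a minimal, hypothetical sketch of the per-run merging requested in the last comment (the file names and directory layout are assumptions, not the actual AliRoot/Grid conventions), a merge of per-chunk QA output with ROOT's TFileMerger could look like this:

```cpp
// Minimal sketch: merge per-chunk QA files into one file per run.
// File names and directory layout are assumptions for illustration only.
#include <TFileMerger.h>
#include <TString.h>
#include <TError.h>

void MergeRunQA(Int_t run, Int_t nChunks)
{
   TFileMerger merger;
   merger.OutputFile(Form("QAresults_run%d.root", run));   // merged per-run output
   for (Int_t i = 0; i < nChunks; ++i)                     // add every chunk of this run
      merger.AddFile(Form("run%d/%03d/QAresults.root", run, i));
   if (!merger.Merge())
      Error("MergeRunQA", "merging failed for run %d", run);
}
```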

  7. Additional sources of QA information
  • LEGO train, in some cases where the histograms in the running AN tag were not enough to assess the QA
  • Other sources I use: DA and DCS info via the Shuttle (i.e. calibration-type info)
  • However, it could be integrated with further checks
  • Some information is missing and is provided only at a much lower level (logbook, database) or at analysis level
  • LEGO train with the filtered data with enhanced rare samples
  • PHOS DQM is not perfect and does not always indicate hardware problems during data taking. It should be improved on our side.
  • Filtered (high-pT) trees

  8. Is it easy to find and access the QA information you need?

  9. Comments on the access
  • Too many objects and a lack of documentation
  • Many different sources
  • Often spread over various personal web pages
  • Merging of raw QA is still needed and not done
  • A common database with QA information, trending and period-based reference distributions (e.g. for PID) is missing
  • A common place for ALL information (tracking, detectors, PID, ...), ideally a web interface, is missing. Best with trending plots, single-run distributions etc. Currently the information is spread over the RCT (global), presentations, 'private' web pages, ...
  • There is no standard place where we can find standard QA plots, nor trend plots. No central publishing of standard QA.

  10. Is the QA processing fast enough for your purpose?
  • Due to the merging there are sometimes long delays
  • The automatic part, yes. Expert QA (within LEGO trains) not yet. Global tracking and PID are missing; the standard QA is detector oriented. Work in progress.

  11. Is the QA decision you take automated?

  12. Comments on the automatic QA
  • Automated trending, but interpretation by a person, guided by graphs
  • Reference data
  • The code should be improved
  • The offline raw data QA was checked several times with my own code outside the frameworks
  • Partially, because of the merging of raw QA. For ESD QA it is now automated.
  • It is done automatically in DQM but still manually for the QA output. The QA output is checked by automatic trending tools, but the goodness of the results is always assessed by the analyzer.
  • Online it is automatic (threshold on data occupancy; see the occupancy sketch below)
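For illustration, a minimal sketch of the kind of automatic occupancy threshold mentioned in the last comment; the histogram name and the thresholds are assumptions, not the actual DQM configuration:

```cpp
// Minimal sketch: automatic pass/fail on mean channel occupancy.
// Histogram name and thresholds are assumptions for illustration.
#include <TFile.h>
#include <TH1.h>
#include <iostream>

Bool_t OccupancyOK(const char *qaFile,
                   Double_t minOcc = 0.02, Double_t maxOcc = 0.95)
{
   TFile *f = TFile::Open(qaFile);
   if (!f || f->IsZombie()) return kFALSE;
   TH1 *hOcc = (TH1 *)f->Get("hChannelOccupancy");   // assumed histogram name
   if (!hOcc) { f->Close(); return kFALSE; }
   Double_t mean = hOcc->GetMean();                   // mean occupancy over channels
   Bool_t ok = (mean > minOcc && mean < maxOcc);
   std::cout << qaFile << ": mean occupancy " << mean
             << (ok ? " -> OK" : " -> BAD") << std::endl;
   f->Close();
   return ok;
}
```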

  13. Comments on automatic QA
  • For ESD QA some human brain is needed to digest the trends
  • Reference data, statistical values
  • Comparisons w.r.t. the reference (see the comparison sketch below)
  • Reference data
  • The decision is based on trending variables, but a fully automatic script is not yet available. Work in progress (manpower problem).
  • The QA decision is subjective; it is a compromise between the available statistics and data quality. External information from the logbook is often needed.
  • Automatic decisions based on standardized trending/QA; manual checks are still needed
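Several comments describe comparing a run's distributions against reference data. A minimal sketch of such an automated comparison, assuming hypothetical file and histogram names and a probability cut, could use ROOT's Kolmogorov test:

```cpp
// Minimal sketch: compare a run QA distribution to a reference distribution.
// File names, histogram name and the probability cut are assumptions.
#include <TFile.h>
#include <TH1.h>

Bool_t CompatibleWithReference(const char *runFile, const char *refFile,
                               const char *histName, Double_t minProb = 0.05)
{
   TFile *fRun = TFile::Open(runFile);
   TFile *fRef = TFile::Open(refFile);
   if (!fRun || !fRef) return kFALSE;
   TH1 *hRun = (TH1 *)fRun->Get(histName);
   TH1 *hRef = (TH1 *)fRef->Get(histName);
   if (!hRun || !hRef) return kFALSE;
   Double_t prob = hRun->KolmogorovTest(hRef);   // shape compatibility probability
   return prob > minProb;                        // automated part of the decision
}
```

The final judgement would still rest with the analyzer, as the comments above note; such a check only flags candidates for closer inspection.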

  14. Could some (or all) of your QA be automated?
  • General answer: yes, but it needs work and might be difficult
  • More subtle issues could not be spotted by any automatic procedure

  15. Among the QA procedures you are working on, which already fit or can fit the online constraints?
  • General answer: one big part can work online, however some changes are needed
  • The DQM checks already fit the online constraints
  • QA from RAW (currently basically only occupancy + readout checks)
  • More intelligent QA (at the cluster level) requires the full reconstruction online (as in the HLT)
  • For the online constraints, something similar to the QA train needs to be ported online. RAW data DQM, as in AMORE, is not useful.
