
Trigger Software Validation



  1. Trigger Software Validation
     Outline:
     • Reminder of plans
     • Status of infrastructure
     • Planning meeting
     • Other issues
     • Conclusions
     Olga Igonkina (U.Oregon), Ricardo Gonçalo (RHUL)
     TAPM Open Meeting – April 12, 2007

  2. Plans
     • Try to have as many automatic tests as possible
     • Have enough information available to find and solve problems
       • Boring work can be automated to a large extent
     • Available tests (a minimal comparison sketch follows this slide):
       • ANT: compares log files for a few events
       • RTT: compares histograms for ~1k events
       • Full-chain RTT: look for major errors in the chain for ~200 events
       • http://indico.cern.ch/conferenceDisplay.py?confId=12855
     • Need to address:
       • Algorithm validation, via monitoring histograms:
         • time
         • cut variables
         • step counters in hypothesis algorithms
       • Integration tests: does it all work together?
         • TrigDecision
         • steering counters
         • trigger EDM objects
         • reading/writing ESD/AOD/TAG/SAN
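As a rough illustration of what an automated log-file comparison like the ANT one can do, the sketch below extracts named counters from two job logs and flags any that differ. The "REGTEST" line format and the script itself are illustrative assumptions, not the actual ANT code.

```python
import re
import sys

# The "REGTEST name : value" line format is assumed for illustration; a real
# test would grep for whatever summary lines the trigger job actually prints.
COUNTER_RE = re.compile(r"REGTEST\s+(\S+)\s*:\s*(\d+)")

def extract_counters(path):
    """Collect name -> value pairs from REGTEST-style lines in a log file."""
    counters = {}
    with open(path) as log:
        for line in log:
            match = COUNTER_RE.search(line)
            if match:
                counters[match.group(1)] = int(match.group(2))
    return counters

def compare_logs(reference_log, new_log):
    """Return a list of counters that differ between reference and new log."""
    ref = extract_counters(reference_log)
    new = extract_counters(new_log)
    return [
        f"{name}: reference={ref.get(name)} new={new.get(name)}"
        for name in sorted(set(ref) | set(new))
        if ref.get(name) != new.get(name)
    ]

if __name__ == "__main__":
    mismatches = compare_logs(sys.argv[1], sys.argv[2])
    for entry in mismatches:
        print("MISMATCH", entry)
    sys.exit(1 if mismatches else 0)   # nonzero exit signals a failed test
```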

  3. Status of infrastructure
     • Basically well advanced. Thanks to everyone for the fast response!
     • See Tomasz's talk in this meeting for up-to-date details and instructions
     • Histograms were added or are being added to (a schematic of such per-algorithm monitoring follows this slide):
       • TrigT2CaloJet/Tau/..
       • TrigT2Tau
       • TrigTauHypo
       • TrigEgammaHypo
       • TrigJetHypo
       • TrigMuonHypo
       • ID tracking
       • B-physics algorithms
       • Cosmics
       • etc.
     • Announcement: please check the status of the monitoring histograms in your algorithm in the new steering wiki: https://twiki.cern.ch/twiki/bin/view/Atlas/NewSteeringMigrationStatus
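To make the request concrete: the per-algorithm monitoring in question boils down to a timing histogram, the distributions of the cut variables, and counters of which hypothesis step rejected each candidate. The toy class below is a schematic of that idea in plain Python, with made-up cuts (`et`, `iso`); it is not the Athena/HLT monitoring API.

```python
import time
from collections import Counter

class HypoMonitor:
    """Schematic per-algorithm monitoring: timing, cut variables, step
    counters. A stand-in for the real monitoring histograms, not Athena."""

    def __init__(self):
        self.exec_time_ms = []                       # would be a timing histogram
        self.cut_values = {"et": [], "iso": []}      # cut-variable distributions
        self.steps = Counter()                       # step reached per candidate

    def monitored_execute(self, candidate):
        start = time.perf_counter()
        passed = self.hypo(candidate)
        self.exec_time_ms.append(1e3 * (time.perf_counter() - start))
        return passed

    def hypo(self, candidate):
        self.cut_values["et"].append(candidate["et"])
        if candidate["et"] < 20.0:                   # hypothetical threshold
            self.steps["fail_et"] += 1
            return False
        self.cut_values["iso"].append(candidate["iso"])
        if candidate["iso"] > 0.1:                   # hypothetical threshold
            self.steps["fail_isolation"] += 1
            return False
        self.steps["accepted"] += 1
        return True

mon = HypoMonitor()
for cand in [{"et": 35.0, "iso": 0.05}, {"et": 12.0, "iso": 0.3}]:
    mon.monitored_execute(cand)
print(dict(mon.steps))
```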

  4. Status of infrastructure
     • Monitoring infrastructure debugged, also thanks to this exercise
       • Adds only small overheads to HLT algorithms
       • Allows different configurations for online Monitoring and offline Validation
       • Allows prescaled filling of histograms (sketched below)
     • Now creating RTT jobs
       • Will use the same tests as for ANT
       • Run on more events
       • Produce the histograms already added to algorithms
       • Expect to have some jobs running by next week
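Prescaled filling just means filling a histogram for only one in every N calls, which keeps the monitoring overhead small online, while an offline validation job can run with no prescale. A minimal sketch of the idea; the class, binning, and numbers are invented for illustration:

```python
import random

class PrescaledHist1D:
    """Toy 1D histogram that fills only every Nth call, so monitoring adds
    little overhead to a time-critical algorithm."""

    def __init__(self, nbins, lo, hi, prescale=1):
        self.nbins, self.lo, self.hi = nbins, lo, hi
        self.prescale = prescale          # keep 1 out of every `prescale` fills
        self.bins = [0] * nbins
        self._calls = 0

    def fill(self, value):
        self._calls += 1
        if self._calls % self.prescale:   # skip all but every Nth entry
            return
        if self.lo <= value < self.hi:
            ibin = int((value - self.lo) / (self.hi - self.lo) * self.nbins)
            self.bins[ibin] += 1

# Online monitoring might use a coarse prescale; offline validation, none:
online_et = PrescaledHist1D(50, 0.0, 100.0, prescale=100)
offline_et = PrescaledHist1D(50, 0.0, 100.0, prescale=1)

for _ in range(10000):
    et = random.gauss(40.0, 15.0)         # stand-in for a reconstructed quantity
    online_et.fill(et)
    offline_et.fill(et)

print(sum(online_et.bins), sum(offline_et.bins))
```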

  5. Planning meeting
     • Meeting next week with the people responsible for each slice, to discuss:
       • Progress on adding monitoring histograms
       • Test code to read EDM objects and produce histograms
       • Other existing/possible monitoring tools
       • Existing and planned RTT/ANT tests
       • Display and analysis of test results
       • Data samples used in tests
       • Validation of the Level 1 simulation code
       • Record of validation results for past releases
     • Dates now under discussion (the choice seems tough...)
     • Please let me and Olya know of any other subjects

  6. Other issues
     • Changes in RTT test organisation: talk by Vivek Jain and Wouter Verkerke in the SW week: http://indico.cern.ch/conferenceDisplay.py?confId=5060
       • Jobs being classified into: Offline (performance validation!), Development, and Online (for later)
       • Looking into ways of displaying and analysing test results, and also at other tests (memory, size on disk, etc.)
     • DQMF: talk by Michael Wilson in the last SW week
       • Framework used for online monitoring
       • Seems very powerful and flexible
       • Not planning to use it for now ("too much" for our current needs), but plan to keep an eye on it and maybe use it in the future
     • Analysis and Interpretation (AI): talk by Alex Undrus in the last SW week
       • Python-based framework
       • Does post-processing of ANT, RTT, nightly builds, ...
       • Check and validation tools are plugins
       • Analyses log files, histograms, POOL files
     • Tools to measure memory, disk size, time: talk by S. Binet in the last SW week

  7. Conclusions
     • Infrastructure to support validation well advanced (but more work to do)
     • Using common tools; good overlap with online Monitoring!
     • Looking at new tools and methods to display and analyse results

  8. AI
     [Screenshot of the AI results web page: separate tables for the Offline (OFF), Development (DEV) and Online (ON) job classes; columns for "Offline", "Development" and "Online"; a colour-coded result for each tool, linking to the log file or histograms]
     • Plugins (a toy plugin example follows this slide):
       • Tools to analyse log files are in place
       • Several tools under development for histogram analysis
     • AI deploys tests and publishes results on a web page: http://atlas-computing.web.cern.ch/atlas-computing/links/distDirectory/nightlies/trials/ai_www/
     • See also the wiki: https://twiki.cern.ch/twiki/bin/view/Atlas/AnalysisInterpretation
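Slide 6 described AI as a Python framework whose check and validation tools are plugins. The sketch below shows the general shape such plugins might take: each one inspects a job's outputs and returns a status that the results page could colour-code. The class names, the job record, and the checks themselves are all hypothetical, not the real AnalysisInterpretation interfaces.

```python
import re

class LogErrorCheck:
    """Hypothetical plugin: scan a job's log file for ERROR/FATAL messages."""
    name = "log_errors"

    def run(self, job):
        with open(job["logfile"]) as log:
            bad = [l.rstrip() for l in log if re.search(r"\b(ERROR|FATAL)\b", l)]
        return ("error" if bad else "ok", bad[:10])   # status + first few details

class HistEntriesCheck:
    """Hypothetical plugin: flag histograms with far fewer entries than a reference."""
    name = "hist_entries"

    def run(self, job):
        low = [name for name, (entries, reference) in job["hist_entries"].items()
               if entries < 0.5 * reference]
        return ("warn" if low else "ok", low)

def run_checks(job, plugins):
    """Run every plugin on the job; one colour-codable status per tool."""
    return {plugin.name: plugin.run(job) for plugin in plugins}

if __name__ == "__main__":
    # Invented example inputs; a real job record would come from ANT/RTT output.
    job = {"hist_entries": {"EF_tau_pt": (950, 1000), "L2_tau_eta": (300, 1000)}}
    for tool, (status, details) in run_checks(job, [HistEntriesCheck()]).items():
        print(f"{tool}: {status} {details}")
```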
