
Analysis efficiency




  1. Analysis efficiency Andrei Gheata ALICE offline week 03 October 2012

  2. Sources for improving analysis efficiency
  • The analysis flow involves mixed processing phases per event:
    • Reading event data from disk – sequential (!)
    • De-serializing the event object hierarchy – sequential (!)
    • Processing the event – parallelizable
    • Cleaning the event structures – sequential
    • Writing the output – sequential but parallelizable
    • Merging the outputs – sequential but parallelizable
  • The efficiency of the analysis job:
    • job_eff = (t_ds + t_proc + t_cl) / t_total
    • analysis_eff = t_proc / t_total
  • Time/event for the different phases depends on many factors:
    • T_read ~ IOPS × event_size / read_throughput – to be minimized: minimize the event size, keep the read throughput under control
    • T_ds + T_cl ~ event_size × n_branches – to be minimized: minimize event size and complexity
    • T_proc = Σ_wagons T_i – to be maximized: maximize the number of wagons and the useful processing
    • T_write = output_size / write_throughput – to be minimized
  [Slide diagram: per-event timeline over events #0…#n, #m, #p, showing the phases t_read, t_ds, t_proc, t_cl, t_write and the final t_merge]

  3. Monitoring analysis efficiency
  • Instrumentation at the level of TAlienFile and AliAnalysisManager
  • Collecting timings, data transfer sizes and efficiency for the different stages
    • Correlated with site, SE, LFN, PFN
  • Collection of data per subjob, remote or local
    • mgr->SetFileInfoLog("fileinfo.log");
  • Already in action for LEGO trains

  4. Monitored analysis info
  Processed input files:
  #################################################################
  pfn        /11/60343/578c4420-6178-11e1-9cd1-00266cfd8b68#AliAOD.root
  url        root://xrootd3.farm.particle.cz:1094//11/60343/578c4420-6178-11e1-9cd1-00266cfd8b68#AliAOD.root
  se         ALICE::Prague::SE
  image      1
  nreplicas  0
  openstamp  1348559810
  opentime   0.701
  runtime    2836.503
  filesize   668.547
  readsize   671.568
  throughput 0.237
  #################################################################
  pfn        /13/34934/2ed51c74-618b-11e1-a1cc-63e6dd7c661e#AliAOD.root
  url        root://xrootd3.farm.particle.cz:1094//13/34934/2ed51c74-618b-11e1-a1cc-63e6dd7c661e#AliAOD.root
  se         ALICE::Prague::SE
  image      1
  nreplicas  0
  openstamp  1348562662
  opentime   1.484
  runtime    3890.802
  filesize   642.818
  readsize   640.459
  throughput 0.165
  Analysis info:
  #summary#########################################################
  train_name  train
  root_time   36865.630
  root_cpu    2564.810
  init_time   74.647
  io_mng_time 34510.137
  exec_time   2280.846
  alien_site  CERN
  host_name   lxbse13c04.cern.ch

  5. Throughput plots
  • A simple and intuitive way to present the results
  • Will allow diagnosing both the infrastructure and the analysis
  [Slide plot: throughput [MB/sec] vs. time [sec] per input file (PFN1–PFN5), with the initialization, I/O and execution phases marked]

  6. A few numbers for an empty analysis
  • L = number of concurrent processes running on the disk storage server
  • I/O latency is a killer for events with many branches
  • De-serialization is the determining factor for locally available data – it depends on the size, but ALSO on the complexity (number of branches)

  7. The source of problems
  • Highly fragmented buffer queries over a high-latency network
    • Large number of buffers retrieved sequentially
    • No asynchronous reading or prefetching enabled in xrootd or elsewhere
  • ROOT provides a mechanism to compact buffers and read them asynchronously: TTreeCache
    • Not used until now
    • Now added in AliAnalysisManager

  8. Reading improvement AOD • AOD PbPb, JINR::SE (RTT=65ms to CERN)

  9. Reading improvement AOD • AOD pp, LBL::SE (RTT=173ms to CERN)

  10. Reading improvement MC • ESD pp, CNAF::SE (RTT=20 ms to CERN) • ESD pp, CERN::EOS (RTT=0.3 ms)

  11. What to do to get it
  • For AOD or ESD data, nothing:
    • Cache set by default to 100 MB, async read enabled
    • The cache size can be tuned via mgr->SetCacheSize(bytes)
  • For MC, the cache sizes for kinematics and TR follow the manager setting
    • Don't forget to use: mcHandler->SetPreReadMode(AliMCEventHandler::kLmPreRead)

  12. To-dos
  • Feed analysis info to the alimonitor DB
    • Provide real-time info about analysis efficiency and the status of data flows
    • Point out site configuration and dispatching problems
  • TTreePerfStats-based analysis
    • Check how our data structures perform and pin down possible problems
