
The ROOT system Status & Roadmap



Presentation Transcript


  1. The ROOT System: Status & Roadmap. NEC’2009, Varna, 8 September 2009. René Brun, CERN. Global Overview of ROOT system.

  2. Project History: Politics. Milestones on the timeline: project starts, Jan 1995; public presentation, Nov 1995; adoption at FNAL and RHIC, Sep 1998; ALICE; the Objectivity/LHC++ period; the Hoffmann review; the LCG RTAGs (SEAL, POOL, PI); ROOT in the LCG main stream; staff consolidation.

  3. Major Technical Steps. Milestones on the timeline: I/O based on dictionaries in memory; automatic Tree split; CINT; graphics based on TVirtualX; GUI based on signal/slots; MathLibs; PROOF; Reflex (from SEAL to ROOT); RooFit; TMVA; GL/EVE; CINT7. Source management moved from CMZ to CVS to SVN.

  4. From CVS to SVN. A smooth and fast transition from CVS to SVN; we are very happy with SVN.

  5. ROOT Libs Granularity. More than 100 shared libraries: you load only what you use. root.exe links just 6 shared libraries (VM < 20 MB).

  6. CINT -> LLVM. CINT is the core of ROOT: it parses code, interprets code, and stores the class descriptions. A new version of CINT (CINT7), based on Reflex, was found too slow to go to production. We are considering an upgrade of CINT using LLVM (an Apple-driven open-source project). LLVM is a GCC-compatible compiler with a parser and a just-in-time compiler. CINT/LLVM (CLING) should be C++0x compliant.

  7. Input/Output: Major Steps. From user-written streamers filling a TBuffer, to streamers generated by rootcint, to automatic streamers from the dictionary, with StreamerInfos in self-describing files; member-wise streaming for TClonesArray; member-wise streaming for STL collection<T*>; generalized schema evolution; parallel merge.

  8. Input/Output: Object Streaming. From single-object-wise streaming (from a hand-coded Streamer, to a rootcint-generated Streamer, to a generic Streamer based on the dictionary) to member-wise streaming of collections (std::vector<T> and std::vector<T*>). Member-wise streaming saves space and time: instead of interleaving the members of each element in one buffer (ABCD ABCD ABCD ...), all values of member A are written together, then all of member B, and so on (AAAA... BBBB... CCCC... DDDD...).

  9. I/O and Trees. From branches of basic types created by hand, to branches automatically generated from very complex objects, to branches automatically generated for complex polymorphic objects. Support for object weak references across branches (TRef) with load on demand; Tree friends; TEntryList; automatic branch buffer size optimisation (5.26).

  10. 2-D Graphics. New functions are added at each release; there are always new requests for new styles and coordinate systems. Output formats: ps, pdf, svg, gif, jpg, png, C, root, etc. Move to GL?

  11. The Geometry Package: TGeo. The TGeo classes are now stable. They work with different simulation engines (G3, G4, Fluka; see the Virtual Monte Carlo), with converters G3->G4, G4->TGeo, TGeo->GDML. Used in online systems and reconstruction programs. Built-in facilities for alignment. An impressive gallery of experiments (35 detectors in $ROOTSYS/test/stressGeometry).

  12. 3-D Graphics. Highly optimized GL views in TPad; the GL viewer.

  13. Event Display: EVE. EVE is a GL-based ROOT package for event displays, developed in collaboration with ALICE (AliEve) and CMS (FireWorks). It provides all the GUI widgets, browsers, and GL infrastructure (far better than the old OpenInventor). Now used by many experiments (see e.g. FAIRROOT, ILCROOT) to display raw data, MC events, or detector-oriented visualizations.

  14. GUI. Many enhancements in the GUI classes: browser, HTML browser, tabs, EVE widgets. A GUI builder with a C++ code generator; the code generator works from any existing widget (CTRL/S). The class TRecorder can store and replay a GUI session: all mouse events and keyboard input, including macro execution. Qt interfaces: a big pain, difficult to maintain across the successive versions of Qt.

  15. GUI Examples

  16. GUI Examples II. A ROOT file can be browsed on a remote web server.

  17. RooFit/RooStats. The original BaBar RooFit package has been considerably extended by Wouter Verkerke and is now structured into RooFitCore and RooFit. RooFit is the base for the new RooStats package developed by ATLAS and CMS.

  18. PROOF: the Parallel ROOT Facility
  • Parallel coordination of distributed ROOT sessions
  • Scalable: small serial overhead
  • Transparent: an extension of the local shell
  • Multi-process parallelism: easy adaptation to a broad range of setups, fewer requirements on user code
  • Process data where they are, if possible: minimize data transfers
  • Event-level dynamic load balancing via a pull architecture: minimize wasted cycles
  • Real-time feedback: an output snapshot sent back at a tunable frequency
  • Automatic merging of results
  • Optimized version for multi-core machines (PROOF-Lite)

  19. Interactive-batch. Start a session, go into background mode, and quit:
    $ root -l
    root [0] p = TProof::Open("localhost")
    ...
    root [1] p->Process("tutorials/proof/ProofSimple.C")
    root [2] .q
  Reconnect from any other place (if the query is still running, the dialog box will pop up):
    $ root -l
    root [0] p = TProof::Open("localhost")
    root [1] p->ShowQueries()
    +++ Queries processed during this session: selector: 1, draw: 0
    +++ #:1 ref:"session-pcphsft64-1252061242-8874:q1" sel:ProofSimple completed evts:0-
    root [2] p->Finalize()
  When finished, call Finalize() to execute TSelector::Terminate().

  20. Event-level TSelector framework. Begin(): create histograms, define the output list. Process(): the parallelizable event loop over events 1..n, with a preselection followed by the analysis of accepted events, filling the output list. Terminate(): final analysis, fitting, etc. The same framework can be used for generic, ideally parallel tasks, e.g. MC simulation.

  21. TSelector::Process(). Read only the parts of the event relevant to the analysis:
    // select event
    b_nlhk->GetEntry(entry);   if (nlhk[ik] <= 0.1) return kFALSE;
    b_nlhpi->GetEntry(entry);  if (nlhpi[ipi] <= 0.1) return kFALSE;
    b_ipis->GetEntry(entry);   ipis--;
    if (nlhpi[ipis] <= 0.1) return kFALSE;
    b_njets->GetEntry(entry);  if (njets < 1) return kFALSE;
    // selection made, now analyze the event
    b_dm_d->GetEntry(entry);    // read branch holding dm_d
    b_rpd0_t->GetEntry(entry);  // read branch holding rpd0_t
    b_ptd0_d->GetEntry(entry);  // read branch holding ptd0_d
    // fill some histograms
    hdmd->Fill(dm_d);
    h2->Fill(dm_d, rpd0_t/0.029979*1.8646/ptd0_d);
  See $ROOTSYS/tutorials/tree/h1analysis.cxx

  22. TSelector performance from “Profiling Post-Grid analysis”, A. Shibata, Erice, ACAT 2008

  23. PROOF installations
  ALICE:
  • CERN Analysis Facility: 112 cores, 35 TB (target: 500 cores, 110 TB); prompt analysis of selected data, calibration, alignment, fast simulation; 5-10 concurrent users, ~80 users registered
  • GSI Analysis Facility, Darmstadt: 160 cores, 150 TB Lustre; data analysis, TPC calibration; 5-10 users; performance: 1.4 TB in 20 mins. Other farms: JINR, Turin
  ATLAS:
  • Wisconsin: 200 cores, 100 TB, RAID5; data analysis (Higgs searches); I/O performance tests with multi-RAID; PROOF-Condor integration; ~20 registered users
  • BNL: 112 cores, 50 TB HDD, 192 GB SSD; I/O performance tests with SSD and RAID; tests of PROOF cluster federation; ~25 registered users
  • Test farms at LMU, UA Madrid, UTA, Duke, Manchester

  24. PROOF: more installations
  • NAF, the National Analysis Facility at DESY: ~900 cores shared with batch under SGE; ~80 TB Lustre, dCache; data analysis for ATLAS, CMS, LHCb and ILC; PROOF tested by CMS groups; ~300 registered users
  • CC-IN2P3, Lyon: 160 cores, 17 TB HDD; LHC data analysis
  • Purdue University, West Lafayette, USA: 24 cores, dCache storage; CMS muon reconstruction
  • ...
  (From G. Ganis, "Fermes d'analyses basées sur PROOF" [PROOF-based analysis farms].)

  25. PROOF-Lite
  • PROOF optimized for single many-core machines
  • Zero-configuration setup: no config files and no daemons
  • Workers are processes, not threads, for added robustness
  • Like PROOF, it can exploit fast disks, SSDs, lots of RAM, fast networks and fast CPUs
  • Works with exactly the same user code as PROOF: once your analysis runs on PROOF-Lite, it will also run on PROOF

  26. PROOF-Lite on 1 core:
    void go() {
       gROOT->ProcessLine(".L makeChain.C");
       TChain *chain = makeChain();
       chain->Process("TSelector_Ntuple_Zee.C+");
    }

  27. PROOF-Lite on 8 cores:
    void go() {
       gROOT->ProcessLine(".L makeChain.C");
       TChain *chain = makeChain();
       TProof::Open("");
       chain->SetProof();
       chain->Process("TSelector_Ntuple_Zee.C+");
    }
  The two extra statements will soon be fully automatic.

  28. The new http://root.cern.ch. The old web site http://root.cern.ch has been replaced by a better version with improved content and navigation, based on the Drupal system.

  29. Supported Platforms
  • Linux (RH, SLCx, SUSE, Debian, Ubuntu): gcc 3.4, gcc 4.4 (32- and 64-bit), icc 10.1
  • Mac (PPC, 10.4, 10.5, 10.6): gcc 4.0.1, gcc 4.4, icc 10.1, icc 11.0
  • Windows (XP, Vista): VC++ 7.1, VC++ 9; Cygwin: gcc 3.4, 4.3
  • Solaris and OpenSolaris: CC 5.2, gcc 3.4, gcc 4.4

  30. Robustness & QA. An impressive test suite (roottest) runs in the nightly builds (several hundred tests). We are working on a GUI test suite (based on the Event Recorder).

  31. ROOT developers: more stability.

  32. Summary. After 15 years of development, a good balance between consolidation and new developments. The main ROOT packages (I/O and Trees) are entering a consolidation and optimization phase. We would like to upgrade CINT with the LLVM-based, C++0x-compliant compiler (CLING). PROOF is becoming mainstream for LHC analysis. Better documentation and user support. More stable manpower. Usage is rapidly expanding outside HEP.

  33. Performance. ALICE ESD analysis: I/O limitations limit the scalability inside a machine. ATLAS tests with SSDs on the BNL PROOF farm: 10 nodes / 80 cores, 2.0 GHz, 16 GB RAM, 5 TB HDD / 640 GB SSD; ProofBench analysis, CPU limited. [Plots: rate (MB/s) vs. number of nodes with 1 worker per node, and vs. number of workers on 1 node.] Courtesy of S. Panitkin, BNL.

  34. Files: Local/Remote.
  Local files: from 32- to 64-bit pointers; what about Lustre/ZFS?
    TFile::Open("mylocalfile.root")
  Web files on a remote site, using a standard Apache web server:
    TFile::Open("http://myserver.xx.yy/file.root")
  Remote files served by xrootd: support for parallelism, intelligent read-ahead (via TTreeCache), and multi-threading; at the heart of PROOF:
    TFile::Open("root://myxrootdserver.xx.yy/file.root")
