  1. work in progress
  Philippe & Rene

  2. Memory Pools when reading
  The main point is to reduce memory fragmentation.
  • Currently we new/delete:
    • the zipped buffer (when there is no cache)
    • the unzipped buffer
    • the branches' target objects
  • The idea is to create a permanent pool (sketched below) for:
    • the unzipped buffer
    • the target object, in case the branch has ownership
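As a rough illustration of the pool idea, here is a minimal sketch of a reusable buffer pool. The class name BufferPool and its interface are assumptions made for this example only; it is not an existing ROOT class.

    #include <cstddef>
    #include <vector>

    // Illustrative sketch only: buffers are handed out for unzipping and
    // returned to the pool instead of being deleted, so the same allocations
    // are recycled across baskets and fragmentation is reduced.
    class BufferPool {
    public:
       // Return a buffer of at least 'size' bytes, reusing a free one if possible.
       char *Acquire(std::size_t size) {
          for (std::size_t i = 0; i < fFree.size(); ++i) {
             if (fFree[i].size >= size) {
                char *buf = fFree[i].data;
                fFree.erase(fFree.begin() + i);
                return buf;
             }
          }
          return new char[size];                    // grow the pool only when needed
       }
       // Hand a buffer back to the pool instead of deleting it.
       void Release(char *buf, std::size_t size) {
          Entry e = {buf, size};
          fFree.push_back(e);
       }
       ~BufferPool() {
          for (std::size_t i = 0; i < fFree.size(); ++i) delete[] fFree[i].data;
       }
    private:
       struct Entry { char *data; std::size_t size; };
       std::vector<Entry> fFree;
    };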

  3. Parallel buffer merge
  • The main bottleneck in parallel applications is the merging of the IO (buffers or files). For example, a system with 100 workers can produce 1 GByte of data in one minute, but it then takes more than 1 hour to merge the output files!
  • The idea is to accumulate large buffers per worker (TreeCache, around 20 MBytes) and send these zipped buffers to one or more IO servers that receive them asynchronously (sketched below).
  • Once this is implemented, it could also be used to implement a fast parallel file merge over the network.
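The following is a conceptual sketch of the asynchronous hand-off, not the planned ROOT implementation: each worker pushes its zipped buffer into a shared queue, and a dedicated IO-server thread drains the queue as buffers arrive, so workers never wait for a final merge. All names here are illustrative.

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <vector>

    struct ZippedBuffer {
       std::vector<char> data;                     // compressed buffer payload
    };

    class BufferQueue {
    public:
       void Push(ZippedBuffer buf) {
          std::lock_guard<std::mutex> lock(fMutex);
          fQueue.push(std::move(buf));
          fCond.notify_one();
       }
       ZippedBuffer Pop() {
          std::unique_lock<std::mutex> lock(fMutex);
          while (fQueue.empty()) fCond.wait(lock);
          ZippedBuffer buf = std::move(fQueue.front());
          fQueue.pop();
          return buf;
       }
    private:
       std::queue<ZippedBuffer> fQueue;
       std::mutex fMutex;
       std::condition_variable fCond;
    };

    // IO-server loop: append each buffer to the merged output (file or network
    // connection) as soon as it arrives; an empty buffer acts as a stop signal.
    void IOServerLoop(BufferQueue &queue) {
       for (;;) {
          ZippedBuffer buf = queue.Pop();
          if (buf.data.empty()) break;
          // ... write buf.data to the output file or forward it to a server
       }
    }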

  4. TTree::GetRestOfEntry
  • In skimming applications one typically reads one or a few branches to select events, and then wants to read the rest of the event (or a collection of branches).
  • The idea is to sort the buffers for the remaining branches (like the TreeCache is doing) with a MergeRequest.
  • This requires a change in TTree::GetEntry to preload the baskets, and a new function like TTree::GetEntry(Long64_t entry, TList *branches) (usage sketched below).
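A hypothetical usage sketch of the proposed interface in a skimming job follows. The TTree::GetEntry(Long64_t, TList*) overload does not exist yet (it is the function proposed above), and the branch names are made up for illustration.

    #include "TTree.h"
    #include "TBranch.h"
    #include "TList.h"

    void SkimSketch(TTree *tree) {
       Float_t pt = 0;
       tree->SetBranchAddress("pt", &pt);          // selection branch (example name)
       TBranch *selBranch = tree->GetBranch("pt");

       TList rest;                                 // branches to read for selected events
       rest.Add(tree->GetBranch("tracks"));        // example name
       rest.Add(tree->GetBranch("clusters"));      // example name

       for (Long64_t i = 0; i < tree->GetEntries(); ++i) {
          selBranch->GetEntry(i);                  // read only what the selection needs
          if (pt < 20) continue;                   // selection cut (example)
          tree->GetEntry(i, &rest);                // proposed call: read the remaining
                                                   // branches with their baskets preloaded
       }
    }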

  5. TStreamerInfo::ReadBuffer
  Two possible optimisations are under discussion:
  • TVirtualCollectionProxy::At is an expensive non-inlined virtual function called for each member of a collection. Virtuality should be at the level of the collection, not at the member level.
  • The function is currently implemented as a gigantic switch/case within a loop. Explore replacing it with pointers to functions and, at the same time, minimise the number of "ifs" within loops (see the sketch below).
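The following generic sketch illustrates the second idea; it is not the actual TStreamerInfo code. The type switch is resolved once per element when the description is "compiled", so the per-entry loop only calls through a plain function pointer, with no switch and no per-member "ifs".

    #include <cstddef>
    #include <vector>

    typedef void (*ReadFunc_t)(char *addr, const char *&buf);

    void ReadInt(char *addr, const char *&buf)    { /* decode an int at addr    */ }
    void ReadFloat(char *addr, const char *&buf)  { /* decode a float at addr   */ }
    void ReadDouble(char *addr, const char *&buf) { /* decode a double at addr  */ }

    struct Element {
       int         type;     // streamer element type code
       std::size_t offset;   // offset of the member inside the object
       ReadFunc_t  read;     // chosen once, outside the hot loop
    };

    // Done once, when the streamer info is built.
    void CompileElements(std::vector<Element> &elements) {
       for (std::size_t i = 0; i < elements.size(); ++i) {
          switch (elements[i].type) {
             case 0: elements[i].read = ReadInt;    break;
             case 1: elements[i].read = ReadFloat;  break;
             case 2: elements[i].read = ReadDouble; break;
          }
       }
    }

    // Done for every entry: a straight loop over function pointers.
    void ReadBufferSketch(const std::vector<Element> &elements, char *obj, const char *&buf) {
       for (std::size_t i = 0; i < elements.size(); ++i)
          elements[i].read(obj + elements[i].offset, buf);
    }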

  6. TClass::AddRule
  • To simplify the interface for the automatic schema evolution, we propose TClass::AddRule(classname, rule).
  • The rules could also be specified in a new file, $ROOTSYS/etc/class.rules.
  • We have an immediate candidate for class.rules with the HepMC class HepEvent/GenVertex when calling TFile::MakeProject on ATLAS, CMS and LHCb files (see below):
  • HepEvent m_vertex options="notOwner"
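Laid out as a possible class.rules entry, the candidate rule from the last bullet might look as follows; the whitespace-separated column layout is only an assumption, since the exact file syntax is still to be defined.

    HepEvent   m_vertex   options="notOwner"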

  7. TMemStat revisited
  • Anar is working to get the new collection algorithm working on Linux and Mac (32- and 64-bit versions).
  • Once this is available (Anar is at CERN next week), Rene will work on the visualisation/query system (à la TTreePerfStats) to understand memory leaks, fragmentation and abuse of new/delete (hence the pools).

  8. GDML interface
  • Andrei has agreed to support the GDML interface (ROOT side).

  9. Graphics & GL
  • Timur will be at CERN for 3 weeks to work on the integration of his 3D algorithms in padgl and the GL viewer.
