

Vertexing and composition tools. Wouter Hulsbergen, University of Maryland, on behalf of the vertexing 'group' (Michael Wilson and WH). Outline: what's (not) new in vertexing; status of SimpleComposition; response to some things that came up yesterday.




1. Vertexing and composition tools
Wouter Hulsbergen, University of Maryland
On behalf of the vertexing 'group' (Michael Wilson and WH)
Outline:
• what's (not) new in vertexing
• status of SimpleComposition
• response to some things that came up yesterday

2. Vertexing tools
No new developments in vertexing since summer 2004.
Supported algorithms:
• Cascade: replaces GeoKin; fast, reliable
• TreeFitter: not as fast as Cascade, but applicable to a wider class of problems
• Add4: simple p4 addition, but with the possibility to use mass constraints
Somewhat supported algorithms:
• GammaConv: for γ->e+e-; not really a vertexer, but it works
• GeoKin: still used everywhere in CompProdSequence, but you should not use it any longer
• FastVtx: from the algorithmic point of view, this is the best there is. Unfortunately, some of it does not work (anymore). Needs maintenance.

3. Cascade issues
• the vertex initialization currently picks the first two tracks (like GeoKin)
  • the vertex result depends (slightly) on the order of the input
  • in combination with a bug in the BtaLoadCandidates code, this caused random perturbations in skim selections
  • Michael will find a better solution on a rainy Sunday afternoon
• there is a hard limit on the number of tracks (currently 6?)
  • the limit can be raised, but that will increase the size of the library (and make it compile more slowly)
• there are some limitations with respect to the number of mass constraints on daughter particles. Michael in HN: "I think that your B fit is sufficiently complicated that you should use TreeFitter. There is a limit to the level of complexity I wanted in the Cascade fitter."
Sometimes you are better off using TreeFitter. In general, use Cascade when possible.
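The order-dependence above comes from seeding the vertex with the first two input tracks. One illustrative work-around (a sketch only, not the solution Michael has in mind) is to impose a canonical ordering on the inputs before fitting, so the seed tracks are always the same regardless of how the list was built:

```cpp
#include <algorithm>
#include <vector>

// Toy track: transverse momentum and an arbitrary identifier.
// These names are illustrative, not the real BaBar track classes.
struct Track { double pt; int id; };

// Sort the input tracks by a canonical key (descending pt, id as a
// tie-breaker) so that the "first two tracks" used for the vertex
// seed do not depend on the order the tracks arrived in.
void canonicalOrder(std::vector<Track>& tracks) {
  std::sort(tracks.begin(), tracks.end(),
            [](const Track& a, const Track& b) {
              if (a.pt != b.pt) return a.pt > b.pt;
              return a.id < b.id;
            });
}
```

Two differently ordered copies of the same list then produce identical fit inputs.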

4. TreeFitter issues
• Code stable for a long time. (I ran out of ideas.)
• Algorithm documented as arXiv:physics/0503191 (to appear in NIM-A)
• Open issues:
  • sometimes fits fail so terribly that the p4 is no longer time-like
    • this results in a crash if you try to boost to the particle rest frame
    • just take care that you check the p4
  • gamma conversions don't fit very well
    • a correct implementation requires changing the parameterization of final-state tracks; since this is the only situation in which this matters, I don't want to do this
    • solution 1: use the Conversion fitter ... not really a fit
    • solution 2: use FastVtx ... unfortunately that is broken in different ways
    • if you use a gamma conversion in a decay tree, give it a mass constraint!
  • there is an issue with reproducibility when reading back persisted candidates. This is not TreeFitter specific; it has to do with the fact that certain information about the fitting 'history' of a candidate is not fully persisted.
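The "check the p4" advice above amounts to verifying that the fitted four-momentum is still time-like (m² > 0) before boosting to the rest frame. A minimal sketch, with an illustrative four-vector struct rather than the real Beta classes:

```cpp
#include <cmath>

// Minimal four-vector; names are illustrative, not the BaBar classes.
struct P4 { double E, px, py, pz; };

// m^2 = E^2 - |p|^2. A fit that failed badly can return m^2 <= 0;
// the rest frame of such a candidate is undefined, and boosting to
// it can crash, so test this before boosting.
inline double mass2(const P4& p) {
  return p.E * p.E - (p.px * p.px + p.py * p.py + p.pz * p.pz);
}

inline bool safeToBoost(const P4& p) {
  return p.E > 0.0 && mass2(p) > 0.0;
}
```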

5. Advertisement: the treefit::UpsilonFitter
• To extract Δt, use the treefit::UpsilonFitter
• It is just the normal TreeFitter, but with a slightly extended interface that helps you extract lifetimes
We can sort of 'measure' the lifetime sum! Useful for EXTREME continuum rejection?

6. Some confusion: mass constraints with Add4?
• There exists a 'kinematic' mode for Cascade and GeoKin:
  • without a "Geo" constraint, they operate just like Add4, but they can add mass constraints
• The "Add4" algorithm now deals with mass constraints as well
  • it does so much more efficiently than GeoKin/Cascade
  • it does so at least as accurately as anything else
• Therefore, don't use GeoKin/Cascade for purely kinematic fits. Instead of

    createsmpmaker MySequence MyList {
      [....]
      fittingAlgorithm set "GeoKin"
      fitConstraints set "Momentum"
      fitConstraints set "Mass"
    }

use

    createsmpmaker MySequence MyList {
      [....]
      fittingAlgorithm set "Add4"
      fitConstraints set "Mass"
    }

When you need 'vertexing', you should of course still use Cascade: instead of

    createsmpmaker MySequence MyList {
      [....]
      fittingAlgorithm set "GeoKin"
      fitConstraints set "Geo"
      fitConstraints set "Mass"
    }

use

    createsmpmaker MySequence MyList {
      [....]
      fittingAlgorithm set "Cascade"
      fitConstraints set "Geo"
      fitConstraints set "Mass"
    }
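For intuition, the "purely kinematic" combination that Add4 performs is plain four-momentum addition of the daughters, with no vertex information involved. A minimal sketch (the mass-constraint machinery itself is not shown, and the struct is illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative four-vector, not the real BaBar classes.
struct P4 { double E, px, py, pz; };

// "Add4"-style combination: component-wise sum of the daughter
// four-momenta. No vertex is involved at all.
P4 add4(const std::vector<P4>& daughters) {
  P4 sum{0.0, 0.0, 0.0, 0.0};
  for (const P4& d : daughters) {
    sum.E += d.E; sum.px += d.px; sum.py += d.py; sum.pz += d.pz;
  }
  return sum;
}

// Invariant mass of the combination (clamped to avoid sqrt of a
// tiny negative number from rounding).
double invariantMass(const P4& p) {
  return std::sqrt(std::max(0.0,
      p.E*p.E - p.px*p.px - p.py*p.py - p.pz*p.pz));
}
```

For example, two back-to-back massless photons of 0.5 GeV each combine to a candidate of mass 1 GeV.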

7. More confusion: the 'primaryVertex' constraint
• The 'primaryVertex' constraint is not a constraint like the 'Beam' constraint is:
  • it just means that the primary vertex is used to initialize the position of the vertex
  • this in turn means that for the calculation of photon momenta and track momenta, it uses the primary vertex as 'origin'
• Why not a true constraint? The primary vertex was used in list definitions in two ways:
  • for initialization of the pi0/eta origin in decays to two photons; there, a 'constraint' and an 'initialization' are equivalent
  • for decay-length / 'cos(alpha)' calculations of the Ks; in that case you really do not want to use the primary vertex, since it is most likely biased by the Ks tracks

8. What has changed in SimpleComposition
• There were three problematic issues with the pre-analysis-26 code:
  • the combinatorics problem
  • the 'mixed-input-lists' problem
  • the CP-ambiguity problem
• They might have looked like small issues, but
  • they prevented a real migration of the old composition sequence
  • somebody would have fallen into traps 2 or 3 in the long run
• Unfortunately, the new combiner is complicated ...
  • found one bug after the R16b skim: we can lose the last candidate of a list with 'multilevel' decay trees and CP ambiguities
  • did not backport the bug fix to analysis-26, to keep consistency with the skim
  • pretty sure now that it did not affect the R16b skims, so I will make an 'extra-tag'
• Migration of the D and D* lists did not reveal any new bugs.
• There is a completely new manual ('under construction'):
  http://www.slac.stanford.edu/BFROOT/www/Computing/Offline/AnalysisTools/SimpleComposition
  (see backup slides)

9. Migration from CompositionTools to Smp
• Let me remind you why SimpleComposition is 'better':
  • lists can be defined in tcl → no need to link in new modules
  • lists are only created on demand → lists take no time if not used (*)
  The fact that smp is used by many new people proves its success (**)
• But do we really need a migration?
  • a full migration will allow running the physics sequence by default → people will be more inclined to use 'default' lists
  • list migration is a lot of work
    • automatic migration is not possible
    • it will introduce bugs (but will also remove some) → need QA
    • not every list can be migrated exactly the way it was
• What is the status?
  • out of over 1200 lists in CompositionSequences, about 350 have been migrated or removed
  • only about half of the remaining lists are part of the 'Production' sequence

10. Migration continued
• I will no longer work on migration, except maybe to get the B->charmonium K sequence done (it turns out the Ks->pi0pi0 is a bit of a problem)
• Whoever picks this up: it might be worth studying first which parts of the production sequence are really used
  • don't migrate what is not used; just remove it from ProdSequence
• It makes a lot of sense to combine a migration with a 'retuning'
  • many lists are really 'non-optimal' → retuning will help bring skim rates down (example: the typical D* DeltaM window is 10 sigma)
  • this requires AWG input, however
• We do not want people randomly picking lists for migration → this has resulted in too much inconsistency in the past
  • I propose to create an entity responsible for migrating lists on request

11. (*) Time consumption of 'idle' lists
• Smp lists are created on demand:
  1. the list module puts a 'list definition' object in the event
  2. this object creates the list when another piece of code asks for it
• Step 1 should not take any time, but it seems that it does
  • it seems to take a fraction of a millisecond
  • that is no longer negligible if there are 100s of lists (like now)
• Whether we can solve this problem depends on where the time is consumed ... I really don't know this yet (but Frank W. volunteered to help figure this out)

12. (**) The use of default lists
• The 'simplicity' of creating lists with SimpleComposition also has a downside: people have started to implement their own 'default' lists:
  • at some point there were 5 identical D->Kpi lists around!
  • this was caused by the simple fact that we forced people not to rely on the production sequence, yet did not have an alternative
• Re-implementing your private version of a default list is a really bad idea, because it will make skimming slower
• The solution is again: we need a mechanism for administering requests for list migration, and a body that will deal with those requests

13. SimpleComposition and UsrData
SimpleComposition: NOT for UsrData (at least not in the way it is used now)
[slide shows a 'Homer Simpson for President' poster]

14. How it works
The selections in SimpleComposition are all based on a single FP number:

    talkto MyList {
      preFitSelector set "Mass pdg-0.1:pdg+0.1"
    }

• For each such 'single number' selection:
  • a class is implemented, in this case the 'SmpMassSelection'
  • even without the actual calculation, this class is over 100 lines of C++ code (not my design)
  • a recipe is added to the 'SmpSelectionFactory' to create an object of this selection based on the tcl line above
This is a working solution if the number of selections is small.
• As a 'convenience feature', the result of the calculation in the selection can be stored as UsrData:

    talkto MyList {
      createUsrData set true
    }

This is wonderful, but ...
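The factory "recipe" idea, reduced to its essence, is turning a tcl-style selection string into a one-number window cut. A hypothetical sketch (the function name and parsing details are illustrative, not the real SmpSelectionFactory interface):

```cpp
#include <functional>
#include <sstream>
#include <string>

// Parse a selection spec like "Mass 0.4:0.6" into a predicate that
// tests whether a value lies inside the [lo, hi] window. The real
// factory would also dispatch on the quantity name to the class
// that computes it (e.g. SmpMassSelection); that part is omitted.
std::function<bool(double)> makeWindowSelector(const std::string& spec) {
  std::istringstream in(spec);
  std::string name;   // e.g. "Mass": which quantity to cut on
  double lo = 0.0, hi = 0.0;
  char colon = ':';
  in >> name >> lo >> colon >> hi;   // parse "<name> <lo>:<hi>"
  return [lo, hi](double value) { return lo <= value && value <= hi; };
}
```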

15. Why not? What then?
• a typical ntuple contains at least ~100 different variables
• in that case the current solution is HORRIBLY inefficient, both in terms of coding and in terms of CPU
• Simply put, you do not want something like this:

    talkto MyList {
      postFitSelectors set "cosThrust"
      postFitSelectors set "cosSpher"
      postFitSelectors set "LMom10"
      postFitSelectors set "LMom12"
    }

  • this requires 4 pieces of code to be implemented, which all look practically identical. A PC's nightmare.
  • the thrust axis is calculated 4 times.
• Instead, you want something like this:

    talkto MyList {
      usrData set "ShapeVars: cosSpher cosThrust LMom10 LMom12"
    }

  (here "ShapeVars" is the name of the 'usrdata calculator', and the rest are the variables it needs to store)
• There exist at LEAST 2 implementations of this idea in BaBar code
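The point of the "calculator" form is that the expensive shared quantity is computed once and all requested variables are derived from it. A sketch of that design, under the assumption that the axis is handed in as a stand-in for a real thrust computation (names like shapeVars and cosThrustLike are illustrative, not BaBar code):

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Illustrative 3-vector.
struct Vec3 { double x, y, z; };

// One calculator produces several variables from a single pass over
// the tracks, instead of one selector class (and one thrust
// computation) per variable.
std::map<std::string, double>
shapeVars(const std::vector<Vec3>& tracks, const Vec3& axis) {
  std::map<std::string, double> out;
  double sumP = 0.0, sumAlong = 0.0;
  for (const Vec3& t : tracks) {
    sumP     += std::sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
    sumAlong += std::fabs(t.x*axis.x + t.y*axis.y + t.z*axis.z);
  }
  // All derived variables reuse the same loop results.
  out["cosThrustLike"] = sumP > 0.0 ? sumAlong / sumP : 0.0;
  out["nTracks"]       = double(tracks.size());
  return out;
}
```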

16. Things that came up in this workshop
• a forum for SimpleComposition?
  • there is a forum! (It is called "Vertexing and Composition Tools")
• new selectors in SimpleComposition?
  • rather not; see the new manual
• Dalitz mass selections → I will extend the interface of the Mass selection:

    talkto MyList {
      preFitSelector set "Mass d1+d2 0.2:0.4"
    }

  (not a solution for UsrData!)
• combinatorics in event reconstruction in 'semi-exclusive' analyses
  • very time consuming
  • does smp do such a bad job? (It is faster than anything we had before.)
  • we could make use of persisted composites
• the standard Ks list is not pure enough
  • that is largely historical
  • in the 18 series, use KsTight (but not in 16!)
  • you can also clean up your Ks at ntuple level: learn how to extract a decay length

17. Cleaning up your Ks list
• This was extensively discussed in this hypernews thread:
  http://babar-hn.slac.stanford.edu:5090/HyperNews/RETIRED/get/AnalTools/566.html
• There was also a talk in a collaboration meeting long ago:
  http://www.slac.stanford.edu/BFROOT/www/Organization/CollabMtgs/2004/detMay04/Wed4i/hulsbergen.pdf
• My 3 favourite selection variables are
  • the chisquare of the Ks vertex
  • the Ks decay-length 'significance'
  • the chisquare of the Ks wrt its mother (which can be the beamspot)
• If you use TreeFitter to fit the mother of the Ks, you will have all of this info available (in principle) → I will add an example to the vertexing guide.
• For a Ks with reasonable momentum, you should have practically no background. If your purity is worse than 80%, you are probably doing something wrong.
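The three-variable cleanup above can be sketched as a simple quality struct and cut, where the decay-length 'significance' is the fitted flight length divided by its uncertainty. This is a minimal illustration only: the cut values (chi2 < 10, significance > 5) are placeholder numbers, not recommendations, and in practice the inputs come from the vertex fit:

```cpp
// Fit quantities for one Ks candidate; names are illustrative.
struct KsQuality {
  double vtxChi2;        // chisquare of the Ks vertex
  double decayLength;    // fitted flight length
  double decayLengthErr; // uncertainty on the flight length
  double motherChi2;     // chisquare of the Ks wrt its mother
};

// Decay-length significance: L / sigma(L).
inline double significance(double L, double sigmaL) {
  return sigmaL > 0.0 ? L / sigmaL : 0.0;
}

// Combined selection on the three variables named in the slide.
// The thresholds here are hypothetical placeholders.
inline bool passKsCleanup(const KsQuality& q) {
  return q.vtxChi2 < 10.0 &&
         significance(q.decayLength, q.decayLengthErr) > 5.0 &&
         q.motherChi2 < 10.0;
}
```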

18. Conclusions
• There is not much happening in vertexing. It seems that we have everything we need.
• There are still developments in the composition tools:
  • Smp is complete wrt core functionality for what it is intended to do
  • migration of the production sequence has stalled
  • we need a different solution for usrdata (see next session)
• I will not be in BaBar forever ...
  • do we need a replacement, or
  • should we move responsibility to the analysis tools coordinator?
  • should we revive extensive vertexing QA, e.g. in the DQ group?

19. Backup slides

20. The CP-ambiguity problem
• In Beta we call a candidate 'CP-ambiguous' if
  • the mother is not self-conjugate
  • the final state is self-conjugate
  For example, D0->pi+pi-, but not D0->K+pi- or Ks->pi+pi-.
• Such candidates form a special challenge for the combiners:
  • need to treat a D0->pi+pi- both as D0->pi+pi- and as anti-D0->pi+pi-
  • need to adjust the particle type depending on the mother
  This worked in CompositionTools, but not in SimpleComposition:
  • you would have had to create both D0->pi+pi- and anti-D0->pi+pi-, then merge those lists and use the result as input for further particle building
• The current solution does exactly what you would naively expect:
  • if you create B0->Dcp pi0, you get only B0 (but you get them all)
  • if you create B+->Dcp pi+, you get both B+ and B- (but you get them all)
  Meanwhile this also works if the input D list is a mixture of ambiguous and non-ambiguous candidates
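The definition above can be made concrete with PDG codes: conjugating every final-state particle (flipping the sign of codes that have distinct antiparticles) must reproduce the same multiset, while the mother itself must not be self-conjugate. This is a toy sketch, with a deliberately minimal self-conjugacy rule (only pi0, gamma, Ks listed), not the real Beta logic:

```cpp
#include <algorithm>
#include <vector>

// Toy rule: particles that are their own antiparticle keep their PDG
// code under conjugation; everything else flips sign. Only a few
// codes are listed here for illustration.
bool codeSelfConjugate(int pdg) {
  return pdg == 111 /*pi0*/ || pdg == 22 /*gamma*/ || pdg == 310 /*Ks*/;
}

// A final state is self-conjugate if conjugating each particle gives
// back the same multiset: pi+ pi- (211, -211) is; K+ pi- is not.
bool finalStateSelfConjugate(std::vector<int> codes) {
  std::vector<int> conj;
  for (int c : codes) conj.push_back(codeSelfConjugate(c) ? c : -c);
  std::sort(codes.begin(), codes.end());
  std::sort(conj.begin(), conj.end());
  return codes == conj;
}

// CP-ambiguous: mother NOT self-conjugate, final state self-conjugate.
// D0 (421) -> pi+ pi- qualifies; D0 -> K+ pi- and Ks -> pi+ pi- do not.
bool cpAmbiguous(int motherPdg, const std::vector<int>& daughters) {
  return !codeSelfConjugate(motherPdg) && finalStateSelfConjugate(daughters);
}
```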

21. The problem with mixed input lists
• To make a list with daughters of the same type but from different lists, some gymnastics was needed:

    # create first list
    talkto JPsiTightMuMu1 {
      decayMode set "J/psi -> mu+ mu-"
      daughterListNames set muNNTight
      daughterListNames set muNNLoose
    }

    # create second list
    talkto JPsiTightMuMu2 {
      decayMode set "J/psi -> mu+ mu-"
      daughterListNames set muNNLoose
      daughterListNames set muNNTight
    }

    # now merge
    talkto JPsiTightMuMu {
      inputListNames set JPsiTightMuMu1
      inputListNames set JPsiTightMuMu2
    }

• This is no longer necessary. From now on, use:

    talkto JPsiTightMuMu {
      decayMode set "J/psi -> mu+ mu-"
      daughterListNames set muNNTight
      daughterListNames set muNNLoose
    }

• This works for Maker, Refiner and Sublister.
• One limitation: use a single list for particles of the same type and charge.

22. The combinatorics problem
• Suppose you want to create Ks->π+π- combinations. Somehow you have to take care that each combination appears only once, for example:

    std::vector<ChargedTrack> tracks = ...;
    for(int i=0; i<tracks.size(); ++i)
      for(int j=0; j<i; ++j)
        if(tracks[i].charge()!=tracks[j].charge())
          ....

• But now think about how you would solve this problem for D->Kpi, or if the tracks came from different lists, or if they were actually CP-ambiguous ...
• A generic solution is non-trivial, so Chris chose something robust:
  • simply allow all possible combinations
  • compare each new combination with all 'tried' combinations and remove it if it is a duplicate
  This is VERY time-consuming in full events (CPU ~ list-length^3)
• The new code simply does not create duplicate combinations
  • it is probably similar to Gautier's implementation in CompositionTools, but I didn't check, since that code is hard to read as well
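The "do not create duplicates in the first place" idea on the slide can be sketched by generalizing the triangle loop above to the two-list case. For two different input lists every (i, j) pair is already distinct; only when both daughters come from the same list does the j < i triangle kick in, avoiding self-pairs and (i, j)/(j, i) duplicates, so no "compare against all tried combinations" pass is needed. This is an illustrative reconstruction, not the actual Smp combiner:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Toy candidate: an identifier and a charge. Illustrative only.
struct Cand { int id; double charge; };

// Build opposite-charge pairs without ever producing a duplicate.
// sameList = true means 'a' and 'b' are the same list, so we only
// walk the j < i triangle; otherwise the full rectangle is distinct.
std::vector<std::pair<int, int>>
makePairs(const std::vector<Cand>& a, const std::vector<Cand>& b,
          bool sameList) {
  std::vector<std::pair<int, int>> out;
  for (std::size_t i = 0; i < a.size(); ++i) {
    std::size_t jmax = sameList ? i : b.size();
    for (std::size_t j = 0; j < jmax; ++j)
      if (a[i].charge != b[j].charge)   // e.g. pi+ pi- combinations
        out.emplace_back(a[i].id, b[j].id);
  }
  return out;
}
```

With four tracks of alternating charge in a single list, this yields exactly the four opposite-charge pairs, each once.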
