
Presentation Transcript


  1. TARC: Report from the Mini-Workshop September 20, 2002 All DZero Meeting (Jianming Qian), Valentine Kouznetsov, Avto Karchilava, Rick Van Kooten, HTD

  2. Tracking Algorithm Recommendation Committee Charge
  • Collect information on the performance of the various tracking algorithms:
  • Efficiencies, fake rates, and misreco rates, using standard procedures developed by the global tracking group and standard (both beam and MC-generated) datasets
  • Reconstruction logistics such as CPU time per event, memory, and luminosity dependence
  • Input from physics/ID/algorithm groups to be solicited

  3. Tracking Algorithm Recommendation Committee Charge
  • Make a recommendation on how we should run tracking in p13, taking into account the farm resources available in October, assumed to be:
  • a 25 Hz event rate
  • 58 seconds per event on a 500 MHz machine, of which 29 seconds is available for tracking
  • Note: p13 is frozen October 1st, implying the TARC should move as quickly as possible following this meeting
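The farm numbers above imply a simple capacity calculation; here is a minimal sketch (not from the talk) in which the rate and per-event times are taken from the slide, while the CPU-equivalent count is a derived illustration rather than a quoted figure.

```python
# Minimal sketch of the farm-capacity arithmetic implied by the numbers above.
# The rate and per-event times are from the slide; the CPU count is derived.

event_rate_hz = 25.0        # events per second arriving at the farm
reco_time_s = 58.0          # total reconstruction time per event (500 MHz CPU)
tracking_budget_s = 29.0    # portion of that time available for tracking

# 500 MHz CPU-equivalents needed to keep up with the incoming event rate
cpus_needed = event_rate_hz * reco_time_s            # 25 * 58 = 1450
tracking_fraction = tracking_budget_s / reco_time_s  # 0.5

print(f"500 MHz CPU-equivalents required: {cpus_needed:.0f}")
print(f"Fraction of reco time available for tracking: {tracking_fraction:.0%}")
```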

  4. Acknowledgements
  • A great deal of help and cooperation was available. Special notes of thanks to:
  • The people who make SAM work, including Lee Lueking
  • The D0 Data Farm Reco Group, Heidi Schellman, and Mike Diesburg
  • The Tracking Group, esp. V. Kouznetsov
  • Mark Sosebee and the UTA farms
  • Mike Strauss
  • The physics, ID, and algorithm groups, including the speakers from Wednesday's workshop
  • The tracking algorithm developers

  5. Contents of this Talk
  • Data Samples & Procedures
  • Definitions
  • Presentations from the Mini-Workshop:
  • Tracking algorithms and performance report (S. Khanov)
  • Lifetime b-tagging (B. Wijngaarden)
  • Tau reconstruction (S. Duensing)
  • B hadron reconstruction (V. Jain)
  • EM ID issues (R. Zitoun)
  • Dimuon studies (R. Hooper)
  • Higgs group report (L. Feligioni)
  • Secondary vertex b-tagging in top samples (A. Schwartzman)
  • Top group report (E. Chabalina)
  • Common elements in their reports
  • Summary

  6. Data and Monte Carlo Samples
  • Data files in SAM (picked events have been merged):
  • Run 155554 – test run of 10,000 events
  • Run 157708 – 90,000 events, TV7.31, with instantaneous luminosity ~ 5e30; SMT grade C, CFT in the good-run range with full stereo readout, prior to the calorimeter zero-suppression change
  • 38,000 dimuon events picked by the B-physics group for J/psi, post full stereo readout
  • 16,000 mu+jet events picked by the BID group
  • 5,400 picked high-pT dimuon events, prior to full stereo readout
  • 6,800 picked high-pT di-EM events, post full stereo readout

  7. SAM Definitions: Data
  • Run 155554: %reco_all_0000155554%tk-p11.11-%.root
  • Run 157708: %reco_all_0000157708%tk-p11.11-%.root
  • Mu+jets: %merge_mujet%tk-p11.11-%.root
  • J/psi to dimuons: %dimuon_third_merged%tk-p11.11-%.root
  • Z to ee: %pick_diem%tk-p11.11-%.root
  • Z to mumu (not isolated): %pick_dimuon%tk-p11.11-%.root
  • The third %'s above are gtr, htf, gtrela, htfela, gtrhtf, aa, aa_vtx, or trkall.
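As a minimal sketch of how these patterns resolve per tracking configuration, the snippet below substitutes each algorithm tag at the position of the third '%' wildcard; the substitution convention is an assumption about what the slide intends, not official SAM syntax.

```python
# Minimal sketch (assumed convention, not official SAM syntax): build one
# dataset pattern per tracking configuration by substituting the algorithm
# tag for the third '%' wildcard, as the slide describes.

ALGO_TAGS = ["gtr", "htf", "gtrela", "htfela", "gtrhtf", "aa", "aa_vtx", "trkall"]

BASE_PATTERNS = {
    "Run 155554":     "%reco_all_0000155554%tk-p11.11-%.root",
    "Run 157708":     "%reco_all_0000157708%tk-p11.11-%.root",
    "mu+jets":        "%merge_mujet%tk-p11.11-%.root",
    "J/psi -> mumu":  "%dimuon_third_merged%tk-p11.11-%.root",
    "Z -> ee":        "%pick_diem%tk-p11.11-%.root",
    "Z -> mumu":      "%pick_dimuon%tk-p11.11-%.root",
}

def expand(pattern: str, algo: str) -> str:
    """Substitute the algorithm tag for the third '%' in the pattern."""
    parts = pattern.split("%")                  # "%a%b%c" -> ["", "a", "b", "c"]
    return "%".join(parts[:3]) + algo + parts[3]

for name, pat in BASE_PATTERNS.items():
    print(f"{name:14s} {expand(pat, 'gtr')}")
```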

  8. Data and Monte Carlo Samples
  • Monte Carlo files in SAM:
  • 5,000 Z to ee
  • B MC: 8,000 Bs to Ds eX; 8,000 B to J/psi(muons) Ks; 5,600 Bs to Ds pi
  • 2*10,000 Top Group lepton + jets, with an average of 0.5 and 2.5 additional minbias events
  • 10,000 bbH to bbbb Higgs events
  • 10,000 hadronic tau events
  • A light quark sample?
  • Not all samples used the same simulation in generation

  9. SAM Definitions: Monte Carlo
  • 5,000 Z to ee: %z-ee%tk-p11.11-%.root
  • B MC: %bbbarQQ%tk-p11.11-%.root
  • 2*10,000 Top Group lepton + jets with an average of 0.5 and 2.5 additional minbias: %ttbar-wjj+wlnu%tk-p11.11-%.root
  • 10,000 bbH to bbbb Higgs events: %bbh-bbbb%tk-p11.11-%.root
  • 10,000 hadronic tau MC: %tau_tauhcw%tk-p11.11-%.root
  • A light quark sample?
  • The third %'s above are gtr, htf, gtrela, htfela, gtrhtf, aa, aa_vtx, or trkall.

  10. Links to Workshop Talks

  11. Track Finders
  • Experts may be willing to describe their algorithms in more detail.
  • We asked the algorithm developers to set their own parameters.
  • Material on this and the next few pages is from S. Khanov's talk at the mini-workshop.

  12. Reconstruction Procedure
  • All samples were reconstructed with p11.11
  • Individual algorithms and combinations:
  • gtr (no H-disks in data, yes in MC)
  • htf, with gtr refit (all but aa did that)
  • gtrela: gtr (no overlap) + elastic on leftover hits
  • htfela: htf + elastic on leftovers
  • gtrhtf: OR of gtr (no overlap) + htf
  • aa (some samples had no vertex info, look for aa_vtx; gtr refit now also available; aa tracks have wrong chi^2 and d0hitmask)
  • trkall (all 6) for cross checks
  • 15% of jobs failed to finish the two steps
  • The main problems are not thought to be the fault of the tracking algorithms

  13. Analysis Tools
  • "gtr_analyze"
  • Filled roottuples with reco tracks and their parameters
  • Fills the information needed for comparison with MC
  • Some MC samples didn't retain the MC hits, so a hit-by-hit comparison could not be made
  • "gtr_examine" (M.S.)
  • Root macros which calculate track efficiency, fake rates, ...
  • Tons of plots
  • Primarily for MC samples

  14. Definitions
  • Track quality is described by a χ².
  • Good tracks have matching χ² < 25
  • Misreco tracks: 25 < χ² < 500
  • Fake tracks: χ² > 500
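A minimal sketch of how these categories feed the efficiency and fake-rate numbers reported by the analysis tools; the χ² thresholds are from the slide, while the function names, rate definitions, and toy inputs are illustrative assumptions, not the gtr_examine code.

```python
# Minimal sketch (not the gtr_examine code) of the track classification implied
# by the matching-chi^2 cuts above, plus illustrative efficiency / fake-rate
# bookkeeping.  Thresholds are from the slide; everything else is assumed.

def classify(match_chi2: float) -> str:
    """Label a reconstructed track by its chi^2 match to the MC track."""
    if match_chi2 < 25.0:
        return "good"
    if match_chi2 < 500.0:
        return "misreco"
    return "fake"

def rates(match_chi2s, n_mc_tracks):
    """Efficiency = good tracks / MC tracks; fake rate = fakes / reco tracks."""
    labels = [classify(c) for c in match_chi2s]
    n_reco = len(labels)
    eff = labels.count("good") / n_mc_tracks if n_mc_tracks else 0.0
    fake = labels.count("fake") / n_reco if n_reco else 0.0
    return eff, fake

# toy example: 5 reconstructed tracks matched against 6 MC tracks
print(rates([3.0, 12.0, 40.0, 600.0, 8.0], n_mc_tracks=6))  # -> (0.5, 0.2)
```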

  15. Tracking Algorithm CPU vs. Memory
  • The algorithms take similar amounts of time in tracking, except AA, which is ~3 times faster.
  • The D0Reco time spec is approximately met; improvements are possible.
  • No hard information on occupancy dependence. Top MC took longer than spec for some combinations.

  16. Results from Tracking Algo. Gp. (S. Khanov)
  • A lot of material was presented.
  • It was clear from the first that no algorithm or combination solved our problems.
  • Studies were shown of data and MC efficiencies and fake rates.
  • Number and distributions of tracks in eta and phi for all algorithms and combinations.
  • Z to dimuons, J/psi to dimuons, psi', and phi to KK bumps were shown and fitted with background estimates.
  • Detailed comparison of Z to ee, including a diagram comparing which Z's were identified between three algorithms.

  17. Results from Tracking Algo. Gp. (S. Khanov): plots of 1) mass bumps, 2) J/psi mass split into eta regions, 3) Z to ee overlaps

  18. BID Group Results (Bram W.)
  • Studied b-tagging in jets in Run 157708 and the mu-jet data
  • Number of tracks per jet and number of good tracks (positive DCA, pT > 1.5 GeV/c) in jets with a 0.5 cone
  • "Efficiency" is the fraction of b-jets (defined using the pT-rel mu-jet method) that are tagged
  • "Mistag rate" is the fraction of jets in Run 157708 that are tagged
  • Fleura studied top Monte Carlo
  • Found the tagging probabilities and mistag rates for 1- and 2-tagged jets for each algorithm
  • Presented a clear table.
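A minimal sketch of the two rate definitions quoted above; the counts in the usage lines are hypothetical and only illustrate the bookkeeping.

```python
# Minimal sketch of the rate definitions on the slide (illustrative only):
# "efficiency"  = tagged fraction of b-jets (b-jets defined via pT-rel in mu-jet data)
# "mistag rate" = tagged fraction of all jets in Run 157708

def tag_efficiency(n_bjets_tagged: int, n_bjets: int) -> float:
    """Fraction of b-jets that are tagged."""
    return n_bjets_tagged / n_bjets if n_bjets else 0.0

def mistag_rate(n_jets_tagged: int, n_jets: int) -> float:
    """Fraction of all jets (mostly light flavour) that are tagged."""
    return n_jets_tagged / n_jets if n_jets else 0.0

# hypothetical counts, purely for illustration
print(tag_efficiency(180, 500))   # 0.36
print(mistag_rate(300, 60000))    # 0.005
```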

  19. BID Group Results (Bram W.): example plot (one of several), efficiency vs. mistag rate

  20. Tau ID Group (Silke and Yuri)
  • Studied the hadronic tau MC (signal) and the 4b MC (background)
  • Noted that tau ID is highly sensitive to the tracking efficiency
  • Counted the number of 1- and 3-prongs
  • Looked at the ΣpT of additional tracks in cones around the tau
  • Mass distributions of matched tracks (seemed independent of algorithm)
  • Studied tracking efficiency in 1-prong vs 3-prong vs pT
  • Mapped the eta-phi distribution of lost tracks
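Reading the ΣpT variable above as the scalar sum of additional-track pT inside a cone around the tau direction, a minimal sketch under that assumption is below; the cone size, track list, and function names are chosen purely for illustration, not taken from the tau-ID code.

```python
# Minimal sketch (assumption, not the tau-ID code) of a Sum(pT) variable:
# the scalar sum of the pT of additional tracks inside a Delta-R cone
# around the tau direction.  Cone size and inputs are hypothetical.
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta-R separation, with phi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def sum_pt_in_cone(tau_eta, tau_phi, tracks, cone=0.5):
    """tracks: list of (pt, eta, phi) for tracks not already assigned to the tau."""
    return sum(pt for pt, eta, phi in tracks
               if delta_r(tau_eta, tau_phi, eta, phi) < cone)

# toy usage: two tracks fall inside the cone, one does not
tracks = [(1.2, 0.10, 0.20), (0.8, 0.05, 0.15), (2.0, 1.50, 2.00)]
print(sum_pt_in_cone(0.0, 0.0, tracks, cone=0.5))  # -> 2.0
```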

  21. Tau ID group (Silke and Yuri) • Left Plot shows the # of tracks in 1 & 3 prong tau events • Right plot shows the eff’y vs pT of the prongs for 3 prong taus.

  22. B Hadron Reconstruction (Vivek)
  • Analyzed the dimuon data sample to search for J/psi, Ks, and Λ.
  • Interested in low pT, to reconstruct the pion in the Λ decay.
  • Showed the mass resolution, number of signal, and number of background events for the 6 cases.
  • Analyzed the three B MC samples: Bs to Ds*-π+, B0 to D*-π+, and Bs to Ds*-e+X.
  • Showed direct comparisons of the efficiencies, widths, and misreco rates in the samples.

  23. EM ID (Robert Z.)
  • Studied Z to ee data and MC and applied the track matching used in obtaining the W and Z cross sections recently shown at ICHEP.
  • Plots for p11.09 gtr: σz = 7.6 mm, σφ = 4.6 mrad, σE/p = 0.18, P(χ²).
  • Efficiency depends on the χ² cut; select P(χ²) > 1%.
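A minimal sketch of the P(χ²) track-match selection quoted above; the slide only states the P(χ²) > 1% requirement, so the number of degrees of freedom used here is an assumption for illustration.

```python
# Minimal sketch of a P(chi^2) > 1% track-match cut.  The ndof value is an
# assumption for illustration; only the 1% probability cut is from the slide.
from scipy.stats import chi2

def passes_track_match(chi2_value: float, ndof: int = 3, p_min: float = 0.01) -> bool:
    """True if the chi^2 probability of the EM-cluster/track match exceeds p_min."""
    return chi2.sf(chi2_value, ndof) > p_min

print(passes_track_match(5.0))    # P ~ 0.17   -> passes
print(passes_track_match(20.0))   # P ~ 1.7e-4 -> fails
```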

  24. EM ID
  • Studied Z to ee data and MC and applied the track matching used in obtaining the W and Z cross sections recently shown at ICHEP, in the region |eta| < 0.8
  • Studied the eta dependence of the efficiencies in the Monte Carlo (efficiencies are per track)

  25. EM ID
  • TARC Z events run with p11.11
  • Various algorithms
  • |eta| < 0.8

  26. New Phenomena and Muon ID (Ryan H.)
  • Concentrated on the large dimuon data sample
  • Compared the 6 cases against J/psi, upsilon, and Z to dimuons:
  • Number identified
  • Mass and width

  27. New Phenomena and Muon ID (Ryan H.)

  28. Higgs Group Report (Lorenzo F.)
  • Studied the bbH to bbbb MC sample with all tracking algorithms and combinations:
  • Tracking efficiency vs pT and eta
  • Misreconstruction and fake rates vs pT and eta
  • DCA resolution for various pT minima
  • Efficiency and fake rate for track reconstruction in jets
  • B-tagging efficiency and mistag rate
  • A lot of information!

  29. Higgs Group Report (Lorenzo F.)

  30. Higgs Group Report (Lorenzo F.)

  31. Higgs Group Report (Lorenzo F.)

  32. Higgs Group Report (Lorenzo F.)

  33. Secondary Vertex B-Tagging (Ariel) • Studied b-tagging in ttbar MC events for all 6 cases. • B-quark vs light quark tagging eff’y vs jet pT, eta, jet-track multiplicity, and jet multiplicity

  34. Secondary Vertex B-Tagging (Ariel) • Same plot, knee of curve

  35. Tracking in Top Samples (E. Chabalina)
  • Studied tracking efficiency, misreco rate, fake rate, and purity in top MC events, using gtr_examine, for all cases
  • 0.5 and 2.5 additional min-bias events overlaid
  • pT dependence, eta dependence, jet-pT dependence, ...
  • Studied the Z to ee data
  • Numerically rated the 6 cases in tables of criteria!
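A minimal sketch of one way such a numerical rating table can be collapsed into an overall ordering (a summed-rank score); the ranking scheme, criteria, and all numbers below are assumptions for illustration, not the values presented in the talk.

```python
# Minimal sketch (assumed method, not the actual tables): give each of the six
# tracking configurations a 1-6 rank per criterion and sum the ranks
# (lower total = better).  All ranks here are hypothetical.

CASES = ["gtr", "htf", "gtrela", "htfela", "gtrhtf", "aa"]

ranks = {  # hypothetical ranks, purely for illustration
    "efficiency": {"gtr": 4, "htf": 3, "gtrela": 2, "htfela": 1, "gtrhtf": 2, "aa": 5},
    "fake rate":  {"gtr": 2, "htf": 4, "gtrela": 3, "htfela": 5, "gtrhtf": 4, "aa": 1},
    "CPU time":   {"gtr": 3, "htf": 4, "gtrela": 4, "htfela": 5, "gtrhtf": 5, "aa": 1},
}

totals = {c: sum(crit[c] for crit in ranks.values()) for c in CASES}
for case, score in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{case:8s} total rank = {score}")
```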

  36. Tracking in Top Samples (E. Chabalina): plots for the 0.5 min-bias overlay (~500 events) and the 2.5 min-bias overlay (>1000 events)

  37. Tracking in Top Samples (E. Chabalina)

  38. Summary
  • I described the data samples and procedure and summarized the mini-workshop
  • Common elements in the presentations:
  • No magic algorithm
  • For high-pT physics the 3 combinations outperform any single algorithm and aren't strikingly different from each other
  • For low-pT physics there was consensus that combinations including htf had the best efficiency vs mistag fraction
  • This is a start, but it isn't as good as we'd like it to be!
