
UW Classification trees: first comparison with Atwood/Usher trees





  1. UW Classification trees: first comparison with Atwood/Usher trees
  • Issue: how does the boosted tree scheme compare with Insightful Miner's multiple trees?
  • The previous comparison was with single trees only.
  • Reminder: this code is all in GlastRelease, accessible to anyone.
  T. Burnett

  2. The new Atwood/Usher variables
  • Fantastic job by Tracy to make these available. The names are:
  CTBbestEneProb, CTBprofileProb, CTBlastLayerPr, CTBtrackerProb, CTBparamProb, CTBBestEnergy, CTBdeltaEoE, CTBVTX, CTBCORE, CTBGAM
  • A little niggle: can someone tell me the rule on how to capitalize these guys?
  • When is "energy" abbreviated "ene"?
  • When is "prob" abbreviated "pr"?

  3. CTBbestEneProb
  • Tracy recommends 0.1 as a cut on this variable. For the 10M GR v7r3p1 run, here is the distribution, the cut, and the corresponding acceptance: ~4 m² sr.
  • Note: requires the onboard filter cut and at least one track.
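As an illustration of how a cut like this turns into an acceptance number, here is a minimal numpy sketch. The column values, the generation aperture, and the event count are invented stand-ins, not values read from the merit ntuple.

```python
import numpy as np

# Hypothetical stand-in for the CTBbestEneProb column of the merit ntuple;
# in the real analysis this would be read from the 10M-run merit files.
rng = np.random.default_rng(0)
ctb_best_ene_prob = rng.uniform(0.0, 1.0, size=100_000)

# Apply the recommended cut and estimate acceptance as
# (generation aperture) * (fraction of events passing).
GENERATION_APERTURE_M2SR = 6.0   # assumed value, not from the talk
passed = ctb_best_ene_prob > 0.1
acceptance = GENERATION_APERTURE_M2SR * passed.mean()
print(f"passing fraction: {passed.mean():.3f}, acceptance ~ {acceptance:.2f} m^2 sr")
```

The same pattern applies to any of the classification-tree output variables: the cut value moves the working point along the efficiency/purity trade-off.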

  4. The corresponding ΔE/E distributions for the standard 8 energy bins. Note: logarithmic scales!
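A minimal sketch of how such per-bin ΔE/E distributions are formed: events are split into 8 logarithmic energy bins and the fractional energy residual is histogrammed per bin. The energy range, the 10% Gaussian resolution, and the use of a true/reconstructed energy pair are assumptions for illustration, not numbers from the run.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical true and reconstructed energies (MeV); the real analysis would
# take these from the merit ntuple (the reconstructed value being something
# like CTBBestEnergy).
e_true = 10 ** rng.uniform(1.5, 5.5, size=50_000)
e_rec = e_true * rng.normal(1.0, 0.1, size=e_true.size)  # assumed 10% resolution

# 8 logarithmic energy bins, then the Delta-E/E distribution within each bin.
bin_edges = np.logspace(1.5, 5.5, 9)
bin_index = np.digitize(e_true, bin_edges) - 1
for i in range(8):
    de_over_e = (e_rec - e_true)[bin_index == i] / e_true[bin_index == i]
    print(f"bin {i}: {de_over_e.size} events, rms(dE/E) = {de_over_e.std():.3f}")
```

With a logarithmic y-axis, the tails of each ΔE/E distribution (the part the cuts are meant to suppress) become visible even when the core dominates.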

  5. And, with a 0.4 cut:

  6. The UW version
  • UW variable: CTgoodCal.
  • Trained on the even events in allGamma-GR-v7r2-merit-TKR-prune.root.
  • 7 different trees, all boosted 10 times:
  • param: all energies, or separately low, med, high
  • profile
  • tracker
  • lastlayer
  • The best one seems to be the combination of the low, med, and high param trees.
  • Results are shown for performance on the 10M run:
  allGamma-v7r3p1-10M-merit.root
  allGamma-v7r3p1-10M-merit_1.root
  allGamma-v7r3p1-10M-merit_2.root
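The train-on-even / test-on-odd boosting scheme above can be sketched with a minimal hand-rolled AdaBoost over decision stumps. This is a toy stand-in, not the GlastRelease classification code: the features, labels, and stump learner are all invented for illustration.

```python
import numpy as np

# Toy features standing in for merit-ntuple variables, with a synthetic
# "good cal" label; everything here is an assumption for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)

def fit_stump(X, y, w):
    """Pick the (feature, threshold, sign) cut with least weighted error."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def boost(X, y, rounds=10):
    """AdaBoost: reweight events each round toward the ones still misclassified."""
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        err, j, t, s = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(X[:, j] > t, s, -s)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def classify(stumps, X):
    f = sum(a * np.where(X[:, j] > t, s, -s) for a, j, t, s in stumps)
    return np.sign(f)

# Train on the even events, evaluate on the held-out odd events --
# the same even/odd split described in the talk -- boosting 10 times.
stumps = boost(X[::2], y[::2], rounds=10)
acc = (classify(stumps, X[1::2]) == y[1::2]).mean()
print(f"held-out accuracy: {acc:.3f}")
```

The even/odd split gives an unbiased performance estimate for free, which is why the results on the independent 10M run are the meaningful comparison.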

  7. CTgoodCal results: cut at 0.5

  8. The corresponding ΔE/E distributions for the standard 8 energy bins. Note: logarithmic scales!

  9. Status
  • Boosted trees seem to be at least equivalent to IM multiple trees.
  • Training can be done by anyone.
  • Next week, compare the following:
  • CTBVTX with CTvertex
  • CTBCORE with CTgoodPsf: how do our standard  and  compare? (See Jim's fits)
  • Following week:
  • CTBGAM vs. CTgamma
