
Retraining Kaon Neural Net


Presentation Transcript


1. Retraining Kaon Neural Net. Kalanand Mishra, University of Cincinnati.

2. Motivation
• This exercise is aimed at improving the performance of the KNN selectors.
• Kaon PID control samples are obtained from the D* decay chain D*+ → D0 [→ K-π+] πs+. The track selection and cuts used to obtain the control sample are described in detail in BAD 1056 (author: Sheila Mclachlin).
• The original Kaon neural net (KNN) training was done by Giampiero Mancinelli and Stephen Sekula circa 2000 (analysis-3), using MC events (they did not use the PID control sample). They used four neural net input variables: the likelihoods from the SVT, DCH, and DRC (global), plus the K momentum.
• I intend to use two additional input variables: the track-based DRC likelihood and the polar angle (θ) of the kaon track. The full input list is sketched after this slide.
• I have started the training with the PID control sample (Run 4). I will repeat the same exercise for the MC sample and also for truth-matched MC events.
• Thanks to the higher statistics and better resolution of the control sample now available, I started with a purer sample (by applying tighter cuts).
• Many thanks to Kevin Flood and Giampiero Mancinelli for helping me get started and explaining the steps involved.
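
The six inputs can be summarized as below. This is a minimal sketch; the variable names are hypothetical placeholders, not actual BaBar ntuple branch names.

```python
# The six KNN inputs described above: the four original inputs plus the
# two proposed additions. Names are illustrative only.
INPUT_VARS = [
    "svt_lh",      # SVT likelihood (original input)
    "dch_lh",      # DCH likelihood (original input)
    "drc_glb_lh",  # DRC global likelihood (original input)
    "mom",         # kaon lab momentum P (original input)
    "drc_trk_lh",  # track-based DRC likelihood (proposed addition)
    "theta",       # kaon polar angle theta (proposed addition)
]
```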

3. K-π+ invariant mass in control sample. With P* > 1.5 GeV/c, the purity within 1σ is 97%; with no P* cut, it is 96%. Conclusion: the P* cut improves signal purity, so we will go ahead with this cut. Other cuts: K-π+ vertex probability > 0.01 and a DIRC-acceptance requirement.
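
A minimal sketch of this selection and the 1σ-purity estimate, assuming a NumPy record array with hypothetical fields pstar, vtx_prob, in_dirc, and m_kpi; the flat-sideband background estimate is an illustrative stand-in for the actual mass fit.

```python
import numpy as np

def select_control_sample(events):
    """Tighter cuts from this slide: P* > 1.5 GeV/c, K-pi vertex
    probability > 0.01, and the track inside the DIRC acceptance."""
    mask = ((events["pstar"] > 1.5)
            & (events["vtx_prob"] > 0.01)
            & events["in_dirc"])
    return events[mask]

def purity_within_1sigma(m_kpi, m0=1.8648, sigma=0.007):
    """Signal purity within m0 +/- 1 sigma of the K-pi mass peak,
    with flat sidebands standing in for the background fit.
    sigma and the sideband placement are illustrative numbers."""
    in_window = np.abs(m_kpi - m0) < sigma
    # Sidebands with the same total width (2 sigma) as the window.
    in_sideband = ((np.abs(m_kpi - m0) > 4 * sigma)
                   & (np.abs(m_kpi - m0) < 5 * sigma))
    n_window = np.count_nonzero(in_window)
    n_bkg = np.count_nonzero(in_sideband)  # equal width -> direct estimate
    return (n_window - n_bkg) / n_window
```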

4. |m(D*) - m(D0)| distribution in control sample. Conclusion: the P* cut doesn't affect the Δm resolution.
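
For reference, the plotted quantity is the D*-D0 mass difference; a small sketch, with hypothetical field names m_dstar and m_d0:

```python
import numpy as np

def delta_m_window(events, dm0=0.1454, half_width=0.0015):
    """Mass difference dm = m(D*) - m(D0), which peaks near
    145.4 MeV/c^2 for D*+ -> D0 pi_s+. The window half-width is
    illustrative; the field names are placeholders."""
    dm = events["m_dstar"] - events["m_d0"]
    return dm, np.abs(dm - dm0) < half_width
```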

5. Momentum and cos θ distributions. [Plots: kaon P, pion P, kaon cos θ, pion cos θ.] The momentum distributions are very similar for K and π, and the cos θ distributions are almost identical.

6. Plab vs cos θ distribution. [Plots: kaon and pion.] Conclusion: almost identical distributions for kaons and pions, except on the vertical left edge, where soft pions make a slightly fuzzy boundary.

7. Purity as a function of kaon momentum. [Purity in six momentum bins: 97%, 98%, 93%, 98%, 98%, 98%.]
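
A sketch of how the per-bin purities could be computed, reusing purity_within_1sigma() from the slide-3 sketch; the bin edges are illustrative, not the ones used for the six bins above.

```python
import numpy as np

def purity_vs_momentum(p_kaon, m_kpi,
                       edges=(0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0)):
    """K-pi peak purity in bins of kaon momentum (GeV/c)."""
    purities = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (p_kaon >= lo) & (p_kaon < hi)
        purities.append(purity_within_1sigma(m_kpi[in_bin]))
    return purities
```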

8. NN input variables. The input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh. [Panels: scaled P, scaled θ, and the SVT likelihood; one plotted quantity is not an input variable.]
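
The slides show "scaled" P and θ but do not state the transform; a common choice, shown here purely as an assumption, is min-max scaling of each input column to [0, 1] before training.

```python
import numpy as np

def scale_inputs(X):
    """Min-max scale each input column to [0, 1]; also returns the
    (min, max) pair needed to scale new tracks the same way. The
    actual transform used in the training is not specified here."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min), (x_min, x_max)
```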

9. NN input variables. The input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh. [Panels: DCH likelihood, DRC-global likelihood, DRC-track likelihood.]

10. A sample of 120,000 events with inputs svt-lh, dch-lh, glb-lh, trk-lh, P, and θ: NN output at the optimal point.
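
A minimal training sketch for this 120,000-event sample. scikit-learn's MLPClassifier stands in for the actual BaBar neural-net package, which the slides do not name, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_knn(X, y, n_hidden=18):
    """Train a single-hidden-layer net on the six scaled inputs.
    X: (n_events, 6) array, y: labels (1 = kaon, 0 = pion)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        activation="logistic",  # sigmoid units
                        max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return net, net.score(X_te, y_te)  # held-out accuracy
```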

11. A sample of 120,000 events with inputs svt-lh, dch-lh, glb-lh, trk-lh, P, and θ: signal performance.

12. A sample of 120,000 events with inputs svt-lh, dch-lh, glb-lh, trk-lh, P, and θ: background performance.
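
The signal and background performance of slides 11-12 reduce to kaon efficiency and pion rejection at a given cut on the NN output; a small sketch:

```python
import numpy as np

def eff_and_rejection(scores, y, cut):
    """Kaon efficiency and pion rejection for a cut on the NN output.
    scores: per-track NN output, y: truth label (1 = kaon, 0 = pion)."""
    is_kaon = (y == 1)
    kaon_eff = np.mean(scores[is_kaon] > cut)    # kaons kept
    pion_rej = np.mean(scores[~is_kaon] <= cut)  # pions removed
    return kaon_eff, pion_rej
```

With the train_knn() sketch above, the scores would come from net.predict_proba(X)[:, 1].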

13. A sample of 120,000 events with inputs svt-lh, dch-lh, glb-lh, trk-lh, P, and θ: performance vs. number of hidden nodes. Performance saturates at around 18 hidden nodes.
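
The saturation study could be reproduced with a simple scan over the hidden-layer size, assuming X and y are the scaled inputs and labels from the earlier sketches:

```python
# Scan the hidden-layer size with the train_knn() sketch from slide 10.
for n_hidden in (4, 8, 12, 16, 18, 24, 32):
    _, test_acc = train_knn(X, y, n_hidden=n_hidden)
    print(f"{n_hidden:2d} hidden nodes -> held-out accuracy {test_acc:.4f}")
```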

14. Summary
• I have set up the machinery and started training the K neural net.
• One way to proceed is to include P and θ as input variables after flattening the sample in the P-θ plane (to remove the built-in kinematic bias spread across this plane); a reweighting sketch follows this slide.
• The other way is to do the training in bins of P and cos θ. This approach seems more robust, but it comes with more overhead and requires more time and effort; it also may or may not have a performance advantage over the first approach.
• By analyzing the performance of the neural net on a sample with both of these approaches, we will decide which way to go.
• The performance of the neural net will be analyzed in terms of kaon efficiency vs. pion rejection [and also kaon efficiency vs. pion rejection as a function of both momentum and θ].
• Stay tuned!
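
A sketch of the first option: flatten the sample in the P-θ plane by weighting each event with the inverse of its 2D bin occupancy. The binning is illustrative, and for a fitter without native weight support the weights could instead drive a resampling of the events.

```python
import numpy as np

def flatten_weights(p, theta, n_p=20, n_theta=20):
    """Per-event weights that make the sample uniform in the
    P-theta plane, removing the built-in kinematic bias."""
    hist, p_edges, t_edges = np.histogram2d(p, theta,
                                            bins=(n_p, n_theta))
    # Locate each event's 2D bin and weight by inverse occupancy.
    ip = np.clip(np.digitize(p, p_edges) - 1, 0, n_p - 1)
    it = np.clip(np.digitize(theta, t_edges) - 1, 0, n_theta - 1)
    w = 1.0 / hist[ip, it]          # each event's own bin is occupied
    return w * len(w) / w.sum()     # normalize to unit mean weight
```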
