
Update on Photons


Presentation Transcript


  1. Update on Photons • More on π0 kinematic fit potential in hadronic events. • Further H-matrix studies (with Eric Benavidez). Graham W. Wilson, Univ. of Kansas

  2. π0 kinematic fit potential • See Vancouver talk re intrinsic π0 energy resolution improvement given correct pairing of well-measured photons. • Today, characterize better the multi-photon issues in Z → uu, dd, ss events. • Define prompt photons as originating within 10 cm of the origin (NB differs from standard ct < 10 cm definition)
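
The fit in question is a 1-C (single mass-constraint) fit of a photon pair to the π0 mass. A minimal Python sketch of such a fit is shown below; it assumes the photon directions are perfectly measured and varies only the two energies, with a generic stochastic resolution term (σ_E = 0.17√E) standing in for the real detector parameterisation. Function names and numbers are illustrative, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

M_PI0 = 0.1349768  # GeV, PDG pi0 mass

def gg_mass(e1, e2, cos12):
    """Invariant mass of two massless photons with opening-angle cosine cos12."""
    return np.sqrt(max(2.0 * e1 * e2 * (1.0 - cos12), 0.0))

def fit_pi0(e1_meas, e2_meas, cos12, stoch=0.17):
    """Return (fitted energies, chi2) after imposing m(gamma gamma) = m(pi0)."""
    s1 = stoch * np.sqrt(e1_meas)   # assumed stochastic-term energy resolution
    s2 = stoch * np.sqrt(e2_meas)

    def chi2(e):
        return ((e[0] - e1_meas) / s1) ** 2 + ((e[1] - e2_meas) / s2) ** 2

    constraint = {"type": "eq",
                  "fun": lambda e: gg_mass(e[0], e[1], cos12) - M_PI0}
    res = minimize(chi2, x0=[e1_meas, e2_meas], constraints=[constraint])
    return res.x, chi2(res.x)

# Example: a 5 GeV / 2 GeV pair with a 4-degree opening angle
(e1_fit, e2_fit), c2 = fit_pi0(5.0, 2.0, np.cos(np.radians(4.0)))
print(e1_fit, e2_fit, c2)
```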

  3. On average 19.2 GeV (21.0%)

  4. On average, 2.1 GeV (2.3%)

  5. Intrinsic prompt photon combinatorial background in the mγγ distribution, assuming perfect resolution and requiring Eγ > 1 GeV. With decent resolution, the combinatorics are not so horrendous … especially if one adopts a strategy of finding the most energetic and/or most symmetric decays first. Next step: play with some algorithms
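
One possible implementation of the "most energetic and/or most symmetric first" idea is a greedy pairing pass, sketched below; the ±20 MeV mass window and the ranking key are illustrative choices, not numbers from the study.

```python
from itertools import combinations
import numpy as np

M_PI0 = 0.1349768  # GeV

def mgg(p1, p2):
    """Invariant mass of two photon four-vectors (E, px, py, pz)."""
    s = p1 + p2
    return np.sqrt(max(s[0] ** 2 - s[1] ** 2 - s[2] ** 2 - s[3] ** 2, 0.0))

def pair_pi0s(photons, window=0.020):
    """photons: list of numpy four-vectors (E, px, py, pz), already required
    to pass the Egamma > 1 GeV selection. Returns accepted (i, j) index pairs."""
    candidates = []
    for i, j in combinations(range(len(photons)), 2):
        if abs(mgg(photons[i], photons[j]) - M_PI0) > window:
            continue
        e_i, e_j = photons[i][0], photons[j][0]
        e_sum = e_i + e_j
        asym = abs(e_i - e_j) / e_sum
        # rank energetic pairs first; more symmetric decays break ties
        candidates.append((e_sum, -asym, i, j))

    used, pairs = set(), []
    for _, _, i, j in sorted(candidates, reverse=True):
        if i in used or j in used:
            continue
        used.update((i, j))
        pairs.append((i, j))
    return pairs
```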

  6. Conclusion on π0 kinematic fitting • Still very promising • Plan to work on developing an algorithm for the photon-pairing problem • Non-prompt photons (from K0S) are an important second-order effect (certainly in ssbar events!)

  7. H-matrix. Next 3 slides are from Snowmass 05 (as a reference for “standard usage”).

  8. Standard Longitudinal H-Matrix • Developed by Norman Graf. • Compare observed fractional energy deposition per layer with the average behavior of an ensemble of photons, including correlations. • Current default implementation has a measurement vector with 31 variables: 30 fractional energies per layer and the logarithm of the energy. • Method: calculate χ² = DᵀM⁻¹D, where D is the difference vector, Di = xi − x̄i (i = 0, …, 30), and M is the covariance matrix of the 31 variables. • We were using FixedCone clustering with θ = 60 mrad. • Used sidmay05 with low-energy photons to avoid containment issues and issues regarding the change in sampling (with 20+10 geometries).
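
For concreteness, the χ² = DᵀM⁻¹D calculation with the 31-component measurement vector (30 layer fractions plus log E) can be written as in the sketch below; the helper names are illustrative and this is not the implementation referred to above.

```python
import numpy as np

def build_hmatrix(training):
    """training: (N, 31) array of measurement vectors (30 layer energy
    fractions + log E) from an ensemble of reference photons.
    Returns the mean vector and the inverse covariance matrix M^-1."""
    x_ave = training.mean(axis=0)
    m = np.cov(training, rowvar=False)
    return x_ave, np.linalg.inv(m)

def hmatrix_chi2(layer_fractions, cluster_energy, x_ave, m_inv):
    """chi2 = D^T M^-1 D for one cluster, with D = x - x_ave."""
    x = np.append(np.asarray(layer_fractions, dtype=float),
                  np.log(cluster_energy))
    d = x - x_ave
    return float(d @ m_inv @ d)
```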

  9. H-matrix Performance • These photons were used for evaluating the expected fractions and the covariance matrix, M. • Plots: 5 GeV photons, 90°, sidmay05; 20 GeV neutrons, 90°, sidmay05. • Not perfectly distributed … but a lot of discrimination.

  10. H-matrix Performance • Plots: 5 GeV photons, 90°, sidmay05; 20 GeV neutrons, 90°, sidmay05. • E.g. cutting at p > 10⁻¹⁰ => eff(γ) = 99.2%, eff(n) = 9.3%; p > 10⁻⁵ => eff(γ) = 98%, eff(n) = 4.6%.
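
The efficiency numbers at the probability cuts can be tabulated directly from the per-cluster χ² probabilities; a small sketch (array names are placeholders):

```python
import numpy as np

def tabulate_efficiencies(p_gamma, p_neutron, cuts=(1e-10, 1e-5)):
    """p_gamma, p_neutron: arrays of chi2 probabilities for the two samples."""
    for cut in cuts:
        eff_g = np.mean(np.asarray(p_gamma) > cut)
        eff_n = np.mean(np.asarray(p_neutron) > cut)
        print(f"p > {cut:g}: eff(gamma) = {eff_g:.1%}, eff(n) = {eff_n:.1%}")
```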

  11. Perceived limitations of standard method • Chi-squared probability distribution is not flat. • Matrix variables mix energy fractions with cluster energy • Can cause technical difficulties • Implicitly uses the overall energy in the cuts • Gives some scope for a “one-size-fits-all” solution – but unlikely to be the best possible solution. • Matrix averages over the conversion layer • Number of actual layers with significant energy deposits can be << nmax (=> χ² not correctly normalized)

  12. New Strategy • Use an H-matrix containing ONLY the cluster energy fractions per layer. • => cluster energy is something that can be used separately. • Use separate H-matrices depending on the layer with the first significant energy deposit. • E.g. for acme0605, we have H30, H29, H28, … • This has the additional benefit that longitudinal changes in sampling fraction can be treated “seamlessly”. • => conversion point is something that can be added in afterwards as a further discriminant. • Disadvantage: need more MC statistics …
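
A sketch of how the per-starting-layer H-matrices (fractions only, no log E term) might be organised is given below; the container layout, the per-layer threshold as a stand-in for the cell-level cut, and the convention that fractions run from the first significant layer onwards are illustrative assumptions, not details from the talk.

```python
import numpy as np

class LayeredHMatrices:
    """One H-matrix per shower-start layer, using layer energy fractions only."""

    def __init__(self, hmatrices):
        # hmatrices: dict {first_layer: (x_ave, m_inv)}, each trained on
        # photons whose first significant deposit is in that layer.
        self.hmatrices = hmatrices

    def chi2(self, layer_energies, threshold=0.0):
        """layer_energies: per-layer cluster energy in GeV; threshold (GeV) is
        a simple per-layer stand-in for the per-cell significance cut."""
        e = np.asarray(layer_energies, dtype=float)
        significant = np.nonzero(e > threshold)[0]
        if significant.size == 0:
            return None, None
        first = int(significant[0])
        # fractions from the first significant layer onwards, no log(E) term,
        # so the cluster energy remains available as a separate discriminant
        fractions = e[first:] / e.sum()
        x_ave, m_inv = self.hmatrices[first]
        d = fractions - x_ave
        return first, float(d @ m_inv @ d)
```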

  13. More details • Apply a cut of 50 keV per cell. (A MIP gives 124 keV in 320 μm of Si.) • Use the number of layers with a non-zero number of cells in normalizing the χ². • In order to avoid photon fragments, have required clusters to have ncells > 5 and raw cluster energy > 0.03 GeV (cf. mean of 0.08 GeV for 5 GeV photons). • Plot: 5 GeV photon, 90°, acme0605.
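
With the number of degrees of freedom varying cluster by cluster, the χ² probability can be computed as sketched below; the helper names are illustrative, and the 50 keV cell cut is the one quoted above.

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

CELL_THRESHOLD = 50e-6  # 50 keV per cell, expressed in GeV

def cluster_pvalue(chi2_value, cell_energies_per_layer):
    """cell_energies_per_layer: one array of cell energies (GeV) per layer.
    ndof = number of layers with at least one cell above the 50 keV cut."""
    ndof = sum(int(np.any(np.asarray(cells) > CELL_THRESHOLD))
               for cells in cell_energies_per_layer)
    return chi2_dist.sf(chi2_value, df=ndof)
```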

  14. Cuts • Require that the photon converts in the ECAL (r > 1260 mm) in the training samples (rejects conversions in the tracker). • Plot: interaction radius (mm).

  15. 5 GeV photons, 90°, acme0605. Resolution: (19.0 ± 0.2)%/√E.

  16. Probability Distributions • Flatter, but still a spike at zero.

  17. Why is the probability distribution not uniform? (10 GeV photons, 90°, acme0605) • Plots: layer 0, layer 9, layer 23. • Response function is only Gaussian near shower max. • Maybe a likelihood approach would have more potential …

  18. 5 GeV photon: χ²/dof = 37.1/23

  19. Performance • Currently battling floating-point errors associated with neutron clusters that are not at all photon-like.

  20. 10 GeV neutron: χ²/dof = 1562/25

  21. 10 GeV photons and 10 GeV neutrons • Plot: photon purity (assuming Nγ = Nn) vs photon efficiency. • I suspect this is worse than the actual performance due to the FP issues.
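
One candidate performance metric is the purity-vs-efficiency curve itself, scanned over the probability cut, with purity defined as eff(γ)/(eff(γ)+eff(n)) under the stated Nγ = Nn assumption; a sketch (array names are placeholders):

```python
import numpy as np

def purity_vs_efficiency(p_gamma, p_neutron, n_points=100):
    """Scan the probability cut; returns (photon efficiency, photon purity)
    with purity = eff_g / (eff_g + eff_n), i.e. assuming N_gamma = N_neutron."""
    cuts = np.logspace(-12, 0, n_points)
    eff_g = np.array([np.mean(np.asarray(p_gamma) > c) for c in cuts])
    eff_n = np.array([np.mean(np.asarray(p_neutron) > c) for c in cuts])
    denom = eff_g + eff_n
    purity = np.divide(eff_g, denom, out=np.ones_like(eff_g), where=denom > 0)
    return eff_g, purity
```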

  22. Plot: photon purity (assuming Nγ = Nn) vs photon efficiency.

  23. Conclusions on H-matrix • H-matrix work is still a work in progress, but the new approach looks promising. • Probably should include some simple preselection cuts which discard really un-photon-like events from the background samples. • Suggestions on what to use as a performance metric are appreciated. • Interested in looking into a likelihood approach in the future.
