
A direct search for KS → 3π0
M. Martini & S. Miscetti. Blessing talk for final result. LNF, 17 Feb 2005.



  1. A direct search for KS → 3π0
M. Martini & S. Miscetti
Blessing talk for final result
LNF, 17 Feb 2005

  2. Summary of the talk
• DATA/MC samples
  - MC calibration of linearity and energy response
• Counting of the normalization sample
  - Splash filter
  - DATA/MC comparison of photon multiplicity
  - Systematic errors
• Description of the signal search
  - Kinematic fit
  - Calibration of χ²_2π and χ²_3π
  - Fit procedure for fake KL-crash
  - Analysis chain
  - Optimization procedure
  - Systematic errors on background and signal
• Upper limit calculation

  3. Samples used for the analysis
• DATA sample: 450 pb⁻¹ collected during the 2001+2002 data taking. Off-peak data are excluded.
• Background sample: 900 pb⁻¹ of φ → KSKL decays (NeuKaon) and ~200 nb⁻¹ of φ → all.
  - In the following we refer to the Monte Carlo productions corresponding to tagged versions DBV-17/DBV-18 as OLDMC/NEWMC.
  - In the NEWMC we can also use the KL-crash tag directly.
  - We have consistently calibrated the energy response and resolution between data and the different MC productions.
• Signal sample: extracted by looking at the generation code from the latest production of KS rare decays. We have used a sample equivalent to the entire 2001 data-taking period (~125000 events).

  4. Calibration of linearity of energy response
[Plots: ⟨E − Pfit⟩/Pfit vs Pfit (MeV), for Data and NEWMC]
• SAMPLE: KS → 2π0 with 4 clusters in the TW and a KL-crash; χ²_fit < 30
• Fitting slices of 20 MeV in Pfit with a Gaussian
• Barrel, Ecap1, Ecap2
• A drop of response of 1.3-1.5% in MC 2004 is not observed in data and OLDMC

  5. Calibration of resolution of energy response
[Plots: σ(E − Pfit)/Pfit vs Pfit (MeV), for Data and NEWMC]
The comparison of the energy resolution between data and MC is instead satisfactory.

  6. Control sample of KS → 2π0
[Plots: pion masses and ΔE before and after calibration of the energy response; OLDMC, 2001 and 2002]

  7. Control sample of KS → 2π0
[Plots: pion masses and ΔE before and after calibration of the energy response; NEWMC, 2001 and 2002]

  8. Method of measurement (I)
We use a method of measurement based on KL-crash tagging:
• KL-crash tag with:
  - E > 100 MeV
  - β* ∈ [0.17, 0.28]
• Normalize the rate of KS → 3π0 to KS → 2π0 for a given KL-crash, in order to minimize the systematic errors and the tag bias.
• For a given KL-crash, we count prompt photons (i.e. the photon multiplicity on the KS side), Nγ, as follows:
  - TW of MIN(3.5 σT, 2 ns)
  - Ecut = 7 MeV
  - |cos(θ)| ≤ 0.915
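
The photon-counting step above can be sketched as follows. This is a minimal sketch: the cluster record layout, the per-cluster time resolution and the toy event are illustrative assumptions; only the three cuts (time window, 7 MeV, |cos θ| ≤ 0.915) come from the slide.

```python
def count_prompt_photons(clusters, sigma_t=0.3):
    """Count prompt neutral clusters (N_gamma) on the KS side.

    A cluster is counted if its time is compatible with a prompt photon,
    i.e. |t - r/c| inside a window of min(3.5*sigma_T, 2 ns), its energy
    is above 7 MeV and it lies within |cos(theta)| <= 0.915.
    Each cluster is a dict with keys t (ns), r (cm), e (MeV), cos_theta;
    sigma_t is a hypothetical per-cluster time resolution in ns.
    """
    c = 29.98                        # speed of light, cm/ns
    tw = min(3.5 * sigma_t, 2.0)     # time window, ns
    n = 0
    for cl in clusters:
        dt = cl["t"] - cl["r"] / c   # residual w.r.t. a prompt photon
        if abs(dt) < tw and cl["e"] > 7.0 and abs(cl["cos_theta"]) <= 0.915:
            n += 1
    return n

# toy event: one prompt photon plus one late (accidental-like) cluster
clusters = [
    {"t": 6.7, "r": 200.0, "e": 50.0, "cos_theta": 0.2},   # prompt
    {"t": 20.0, "r": 200.0, "e": 50.0, "cos_theta": 0.2},  # out of time
]
print(count_prompt_photons(clusters))  # 1
```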

  9. Method of measurement (II)
• We then count the events with Nγ between 3 and 5 for normalization.
• In the 39538 events with Nγ = 6 we instead look for the signal.
• The latter category is dominated by KS → 2π0 + 2 fake photons, due to:
  - accidental coincidence of machine-background events
  - shower fragmentation.
• A non-negligible contribution of fake KL-crash and KL → 3π0 events has to be simulated and taken into account by a proper MC calibration.
• The contamination due to a fake KL-crash and a decay chain different from φ → KSKL is instead negligible (120×2.5) and disappears as soon as the first analysis cuts are applied.

  10. Method of measurement (III)
• To consistently add all the generated MC statistics, we have defined the KL-crash in the MC as an event with a true KL decay vertex position outside the DCH external boundaries. We refer to this sample as the KL-far selection. Appropriate smearing of the generated direction is applied.
• Whenever this condition is not satisfied, we look for fake KL-crash events by searching for clusters passing the KL-crash tag requirements.
• We use the NEWMC to assign a systematic error to this artificial subdivision of the MC sample.
• A kinematic fit procedure, a track veto and a set of discriminating variables have been developed to improve the signal-to-background ratio.
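
A minimal sketch of the KL-far classification described above; the drift-chamber boundary values below are placeholder numbers, not the official KLOE geometry constants.

```python
import math

def is_kl_far(vx, vy, vz, r_dch=200.0, z_dch=165.0):
    """MC 'KL-far' classification: true KL decay vertex (cm) outside the
    drift-chamber boundaries, modeled here as a cylinder of radius r_dch
    and half-length z_dch.  The boundary values are illustrative
    assumptions only.
    """
    rho = math.hypot(vx, vy)         # transverse distance from the beam line
    return rho > r_dch or abs(vz) > z_dch

print(is_kl_far(210.0, 0.0, 50.0))   # True: decays beyond the chamber radius
print(is_kl_far(50.0, 20.0, 30.0))   # False: decays inside the chamber
```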

  11. The splash filter (2001, Nγ = 3)
Before counting events, we found in the 2001 data an unexpected background in the selected samples:
• due to DAΦNE background triggering by itself
• not simulated in the MC
• reduced to a negligible level by the splash filter:
  - rmean > 70 cm
  - Emean > 70 MeV
The systematics on the counting are checked by varying the cut definition.
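
The splash filter can be sketched as a cut on the mean radius and mean energy of the prompt clusters; the accept/reject orientation of the two cuts is our reading of the slide, and the toy inputs are illustrative.

```python
def passes_splash_filter(clusters):
    """Splash filter: accept the event only if the prompt clusters have
    <r> > 70 cm and <E> > 70 MeV, as quoted on the slide.  Each cluster
    is an (r_cm, e_mev) pair.
    """
    if not clusters:
        return False
    r_mean = sum(r for r, _ in clusters) / len(clusters)
    e_mean = sum(e for _, e in clusters) / len(clusters)
    return r_mean > 70.0 and e_mean > 70.0

print(passes_splash_filter([(150.0, 100.0), (180.0, 90.0)]))  # True
print(passes_splash_filter([(40.0, 30.0), (50.0, 20.0)]))     # False
```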

  12. Inclusive distributions: Data vs MC; Nγ = 3
[Plots: points = DATA, --- MC]

  13. Inclusive distributions: Data vs MC; Nγ = 4
[Plots: points = DATA, --- MC]

  14. Inclusive distributions: Data vs MC; Nγ = 5+6
[Plots: points = DATA, --- MC]

  15. Trigger efficiency for KS → 2π0
• Before comparing data and MC rates, the calorimeter trigger efficiency has been measured in data using events surviving the acceptance selection.
• We first identify the clusters coming from the KS or KL and then connect them to the trigger sectors, following the procedure used in KLOE Note 174 (2002).
• The distribution of trigger sectors fired by the KS (KL) is obtained by requiring that the KL (KS) alone satisfies the trigger condition.
• The probability to trigger the event is then εTRIG = 1 − εL(1)·εS(0), where εL,S(n) is the probability, calculated for the KL,S, of firing n trigger sectors.

  16. Trigger efficiency for KS → 2π0

Nγ     εL(1) (%)    εS(0) (%)     εTRIG (%)
3      53.8 ± 0.5   0.83 ± 0.07   99.56 ± 0.04
4      53.6 ± 0.3   0.04 ± 0.01   99.98 ± 0.01
5      53.7 ± 0.2   0.27 ± 0.02   99.86 ± 0.01
6      53.8 ± 0.7   0.85 ± 0.10   99.54 ± 0.05
3-5    53.6 ± 0.2   0.29 ± 0.01   99.85 ± 0.01
4-5    53.6 ± 0.2   0.05 ± 0.02   99.98 ± 0.01

• As expected, εL does not depend on Nγ.
• The highest inefficiency on the KS side is for Nγ different from 4.
• A small difference in εTRIG is expected for the KS in the 3π0 decay.
• The small difference from the KS → π+π−/π0π0 analysis is due to the different acceptance selection.
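
The εTRIG column is consistent, row by row, with εTRIG = 1 − εL(1)·εS(0). A quick numerical check of that relation (the reading of the formula is our inference from the table):

```python
def eps_trig(eps_l1, eps_s0):
    """Event trigger probability when two fired trigger sectors are
    required: under the relation inferred from the table, the event is
    lost only when the KS side fires no sector and the KL side fires
    exactly one, with probability eps_L(1) * eps_S(0)."""
    return 1.0 - eps_l1 * eps_s0

# reproduce the N_gamma = 3 row: eps_L(1) = 53.8%, eps_S(0) = 0.83%
print(round(100 * eps_trig(0.538, 0.0083), 2))  # 99.55, vs 99.56 +- 0.04 quoted
```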

  17. Cosmic veto efficiency for KS → 2π0

Sample       Nobs     Nv/PS    ρ       Nv(γ)/PS   ρ(γ)
DATA Nγ=3    39450    339      0.042   59         0.0075
MC   Nγ=3    76660    1896     0.025   243        0.0032
DATA Nγ=4    86850    624      0.035   145        0.0084
MC   Nγ=4    170000   4398     0.026   726        0.0043
DATA Nγ=5    204700   1561     0.037   326        0.0079
MC   Nγ=5    461800   11870    0.026   1847       0.0040
DATA Nγ=6    19890    146      0.036   36         0.0089
MC   Nγ=6    37820    977      0.026   164        0.0043

  18. Cosmic veto efficiency for KS → 2π0
Integrating Nγ between 3 and 6 we get:

DATA: ρ(all) = 0.0372 ± 0.0007   ρ(γ) = 0.0081 ± 0.0003   ρ(Kcra) = 0.0295 ± 0.0003
MC:   ρ(all) = 0.0255 ± 0.0002   ρ(γ) = 0.0039 ± 0.0001   ρ(Kcra) = 0.0216 ± 0.0002

The data-MC discrepancy is due to both the KL-crash and the photons from the KS; this can have some effect for the KS in the 3π0 decay.

  19. Stability of Nγ along the run: Data vs MC
Before calculating the efficiency of the acceptance selection, we have checked the stability of the photon counting along the run as follows:
• We have calculated the relative fraction of tagged events with a given photon multiplicity, Nγ = K, with respect to the integral of events with Nγ between 3 and 6.
• We have corrected the counting in data for (εTRIG × εCV), while in the simulation we have applied the data-MC efficiency correction for photons.
• We have subdivided the running period into bins of 10 pb⁻¹.

  20. Stability of Nγ along the run: Data vs MC
[Plots: fractions for Nγ = 3, 4, 5, 6 vs run period, for 2001 and 2002; data = black points, MC = red points]

  21. Stability of Nγ: Data vs MC
• For Nγ = 3, 4 the disagreement is not too large, indicating an almost negligible background contamination (below the % level).
• For Nγ = 5, 6 we have a more pronounced data-MC discrepancy, which increases in the 2002 period. These events are dominated by shower fragmentation and accidental coincidences of machine-background clusters.
To understand this point, we have determined the probability to get one, PATW(1), or two, PATW(2), accidental clusters in the time window by looking at the early out-of-time window. This has been done both on data and MC and reported to various MC meetings.

  22. Probability of accidentals: Data vs MC (I)

Prob   Data 2001 (%)   MC 2001 (%)   Data 2002 (%)   MC 2002 (%)
       0.75            1.03          0.38            0.89
       0.14            0.12          0.07            0.10
       0.30            0.16          0.17            0.08
       0.05            0.03          0.02            0.03

Depending on the period, the probability of an accidental overlap between a collision and machine background differs substantially between data and MC. This difference is due to a software bug in the BGG selection procedure. An average correction along the run has been calculated by comparing data and MC in the early out-of-time window.
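
A sketch of how PATW(1) and PATW(2) can be measured from the early out-of-time window, as described on the previous slide. The window edges and the toy sample below are illustrative assumptions, not the values used in the analysis.

```python
def accidental_probabilities(events, t_lo=-30.0, t_hi=-10.0):
    """Estimate P_ATW(1) and P_ATW(2): the probabilities of finding one
    or two accidental clusters in a time window, measured from an early
    out-of-time window [t_lo, t_hi) ns where no collision photon can
    contribute.  'events' is a list of lists of cluster times.
    """
    n1 = n2 = 0
    for times in events:
        k = sum(1 for t in times if t_lo <= t < t_hi)
        if k == 1:
            n1 += 1
        elif k >= 2:
            n2 += 1
    n = len(events)
    return n1 / n, n2 / n

# toy sample: most events empty, a few with early accidental clusters
events = [[] for _ in range(96)] + [[-20.0], [-15.0], [-25.0], [-18.0, -12.0]]
p1, p2 = accidental_probabilities(events)
print(p1, p2)  # 0.03 0.01
```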

  23. Probability of accidentals: Data vs MC (II)
The behavior of FR(5,6) along the running period is well reproduced by the time dependence of the accidental rate.
[Plots: 2001 data, before and after the Racci correction]

  24. Probability of splitting: Data vs MC
Turning off in the MC the counting of accidental and fragmented clusters, we obtain the true MC fraction for a given photon multiplicity. By inserting the PATW previously measured, we can fit the FR(K) values in data and MC and obtain the probability for a cluster to create one (or more than one) fragment.
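
A deliberately simplified forward model of this fit: fold assumed true multiplicity fractions with a single-extra-cluster probability. The real fit also handles two extra clusters and extracts PATW and the splitting probability from the data rather than assuming them; all numbers below are hypothetical.

```python
def observed_fractions(true_frac, p_acc, p_split):
    """Forward model for the observed photon-multiplicity fractions:
    fold the true multiplicity K with the probability of picking up one
    extra cluster (an accidental with probability p_acc, or a shower
    fragment with probability p_split, per event).
    """
    obs = {}
    p_extra = p_acc + p_split          # prob. of exactly one extra cluster
    for k, f in true_frac.items():
        obs[k] = obs.get(k, 0.0) + f * (1.0 - p_extra)
        obs[k + 1] = obs.get(k + 1, 0.0) + f * p_extra
    return obs

# hypothetical true fractions for N_gamma = 3..6
true_frac = {3: 0.10, 4: 0.80, 5: 0.08, 6: 0.02}
print(observed_fractions(true_frac, p_acc=0.01, p_split=0.02))
```

In a fit one would instead scan (p_acc, p_split) to match the measured FR(K) values.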

  25. Selection efficiency for KS → 2π0
To measure the selection efficiency for the KS → 2π0 decay due to the acceptance requirements, we have used a sub-sample of 190000 events simulating the runs taken at the beginning of 2001. To simulate the KL-crash, we consider only the events with a KL-far decay. In the generation code we select the KS → 2π0 decay. We call this sample 2π0-crash. The selection efficiency for a given multiplicity of prompt photons, Nγ = K, is then defined as the fraction of 2π0-crash events counted with Nγ = K.

  26. Selection efficiency for KS → 2π0
[Plots: KS → 2π0, KSKL and no-KSKL components after the γ-efficiency correction; FILFO-rejected, T0-step0 and T0-stolen-by-accidental-cluster components]
Estimated background contamination:
• Nγ = 3 → 1.2% (0.25%)
• Nγ = 4 → 0.2% (0.1%)
The background figures in parentheses refer to the residual splashes in 2001.
To assign a systematic error to the selection efficiency, we consider:
• cluster efficiency correction
• probability of accidental overlap
• probability of splitting
• FILFO
• T0-step0 algorithm
• T0 loss due to accidental clusters

  27. Selection efficiency for KS → 2π0
Adding in quadrature all the sources of systematic error, we obtain the selection efficiency. Using this result and the efficiencies of the trigger and of the cosmic veto, we can calculate the number of events of the normalization sample. This value enters directly in the upper limit calculation.
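
A hedged sketch of the normalization counting: correct the raw tagged count by the trigger efficiency, the cosmic-veto survival probability and the selection efficiency. The exact combination used in the talk is not spelled out on the slide, and the numbers below are illustrative only.

```python
def normalization_count(n_counted, eps_trig, rho_cv, eps_sel):
    """Correct the raw count of tagged KS -> 2pi0 events to the number
    of produced events: divide by the trigger efficiency, the
    cosmic-veto survival probability (1 - rho) and the acceptance
    selection efficiency.  This is our reading of slides 15-27, not a
    formula quoted in the talk.
    """
    return n_counted / (eps_trig * (1.0 - rho_cv) * eps_sel)

# illustrative numbers only (not the analysis values)
print(round(normalization_count(330000, 0.9985, 0.0372, 0.65)))
```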

  28. Kinematic fit procedure
A kinematic fit is applied on the KS side requiring the conservation of 4-momentum (NDOF = 11).
[Plot: χ²_fit; KS → 3π0 (MC), MC background, points = DATA]
χ²_fit < 30 is not enough (only 2/3 of the background is rejected). Other discriminating variables have to be used: (χ²_2π, χ²_3π).

  29. The χ²_2π and χ²_3π
The χ²_2π is built by selecting the 4 out of 6 clusters which best satisfy the kinematics of KS → 2π0.
• The variables used are:
  - the pion mass distributions
  - the opening angle between the pions in the KS C.M. frame
  - 4-momentum conservation
The calibration is done using the KS → 2π0 sample (see next slide).
The χ²_3π is based only on the 3 "best reconstructed" pion masses.
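
A toy version of the χ²_2π construction, keeping only the pion-mass term (the real χ² also uses the opening angle and 4-momentum conservation, as listed above). The cluster format and the mass resolution are assumptions.

```python
import math
from itertools import combinations

M_PI0 = 134.98  # pi0 mass, MeV

def gg_mass(c1, c2):
    """Invariant mass of two photon clusters; each cluster is
    (energy in MeV, unit direction vector)."""
    e1, (x1, y1, z1) = c1
    e2, (x2, y2, z2) = c2
    cos12 = x1 * x2 + y1 * y2 + z1 * z2
    return math.sqrt(max(0.0, 2.0 * e1 * e2 * (1.0 - cos12)))

def best_2pi0_chi2(clusters, sigma_m=15.0):
    """Pick the 4 of the 6 prompt clusters and the pairing into two
    photon pairs whose invariant masses best match m(pi0).
    sigma_m is an assumed mass resolution in MeV."""
    best = float("inf")
    for a, b, c, d in combinations(range(len(clusters)), 4):
        for (i, j), (k, l) in (((a, b), (c, d)),
                               ((a, c), (b, d)),
                               ((a, d), (b, c))):
            m1 = gg_mass(clusters[i], clusters[j])
            m2 = gg_mass(clusters[k], clusters[l])
            chi2 = ((m1 - M_PI0) / sigma_m) ** 2 + ((m2 - M_PI0) / sigma_m) ** 2
            best = min(best, chi2)
    return best

# two true pi0 -> gamma gamma pairs plus two soft junk clusters
clusters6 = [
    (100.0, (1.0, 0.0, 0.0)),
    (100.0, (0.08902, 0.99603, 0.0)),
    (100.0, (-1.0, 0.0, 0.0)),
    (100.0, (-0.08902, -0.99603, 0.0)),
    (20.0, (0.0, 0.0, 1.0)),
    (15.0, (0.0, 0.0, -1.0)),
]
print(best_2pi0_chi2(clusters6) < 0.01)  # True: the right quadruplet is found
```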

  30. Calibration of χ²_2π and χ²_3π
[Plots: Mπ and ΔE distributions for DATA and MC]
In the construction of the χ² we use a different sigma for each sample: DATA and MC (OLDMC, NEWMC), (2001, 2002).

  31. New Fake-Acci-Split calibration procedure
[Plot: distribution of χ²_2π versus χ²_3π]

  32. New Fake-Acci-Split calibration procedure
To better calibrate data and MC, we have also questioned how well the MC reproduces the amount of double shower fragments and double accidental clusters. To understand and calibrate this, we have divided the MC KL-crash events into 2 further classes:
• 2A: events of KS → 2π0 in overlap with 2 accidental clusters (~60%)
• 2S: events of KS → 2π0 with 2 split clusters, or 1 accidental + 1 split cluster (~35%)
To do this, we perform a 3-component fit (2S, 2A and fake events).

  33. New Fake-Acci-Split calibration procedure
[Scatter plots of χ²_2π versus χ²_3π for the DATA, 2S, Fake and 2A samples]

  34. Result of the Fake-Acci-Split calibration
[Plots: χ²_3π projections for all events and in the slices χ²_2π < 14, 14 < χ²_2π < 40, χ²_2π > 40; points = DATA, --- MC]
A good agreement is observed in each scatter-plot region.

  35. Result of the Fake-Acci-Split calibration
[Plots: χ²_2π projections for all events and in the slices χ²_3π < 4, χ²_3π > 4; points = DATA, --- MC]
A good agreement is observed in each scatter-plot region.

  36. DATA-MC comparison at the beginning of the analysis
[Plots: signal box (Sbox) and control regions (CSbox, Up, CUp, Down, CDown) for the NEW and OLD MC productions]
Summing up 2001+2002 for each MC, we can compare DATA with the two different MC productions. A reasonable data-MC comparison is found for both samples at the beginning of the analysis.

  37. Track veto effect on χ²_2π vs χ²_3π
We apply a track veto to reject events with tracks coming from the IP. We reject events with:
• rPCA < 4 cm
• |zPCA| < 10 cm
[Plots: χ²_3π projections for all events and in the slices χ²_2π < 14, 14 < χ²_2π < 40, χ²_2π > 40; points = DATA, --- MC]

  38. Review of the analysis chain
OLD: counting → track veto → ΔE cut → χ²_fit → Sbox optimization
NEW: counting → track veto → ΔE/σE → χ²_fit → Sbox optimization
Since we have observed a small difference in σE between DATA and MC (and we have corrected for this effect in the χ² definition), we have changed the ΔE cut into a cut on ΔE/σE.

  39. Optimization
The optimization is done (J. F. Grivaz and F. Le Diberder, LAL 92-37) with a factor-two improvement in the MC statistics with respect to the old analysis. In this case we obtain the best ratio between surviving background and signal efficiency with the following set of cuts:
• χ²_3π < 4.64
• 12.07 < χ²_2π < 60
• χ²_fit < 40.43
• ΔE/σE > 1.69
In this way we have a signal efficiency ε_sig = (25.21 ± 0.18 stat)%, and we find 2 events with 3.13 ± 0.82 stat expected from MC.
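
For the upper-limit step announced in the summary, a generic Bayesian counting-experiment recipe can be sketched with the quoted Nobs = 2 and Bexp = 3.13. This is a sketch of one standard method, not necessarily the limit-setting procedure used in the talk.

```python
import math

def poisson_pmf(n, mu):
    """Poisson probability of observing n counts with mean mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def bayes_upper_limit(n_obs, b, cl=0.90, s_max=30.0, steps=30000):
    """Bayesian upper limit on a Poisson signal s with known expected
    background b and a flat prior on s >= 0: find S such that the
    normalized integral of L(n_obs | s + b) over [0, S] reaches the
    credibility level cl (simple rectangle-rule integration).
    """
    ds = s_max / steps
    grid = [poisson_pmf(n_obs, i * ds + b) for i in range(steps + 1)]
    total = sum(grid)
    acc = 0.0
    for i, w in enumerate(grid):
        acc += w
        if acc >= cl * total:
            return i * ds
    return s_max

# the numbers quoted on the slide: 2 observed, 3.13 expected background
print(round(bayes_upper_limit(2, 3.13), 2))
```

Note that a larger expected background tightens the limit: for the same 2 observed events, b = 0 would give a weaker (larger) upper limit.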

  40. Data-MC comparison after each analysis step: track veto
[Plots: χ²_3π projections for all events and in the down/central/up χ²_2π slices; points = DATA, --- MC]

  41. Data-MC comparison after each analysis step: track veto + ΔE/σE
[Plots: χ²_3π projections for all events and in the down/central/up χ²_2π slices; points = DATA, --- MC]

  42. Data-MC comparison after each analysis step: end of analysis
[Plots: χ²_3π projections for all events and in the down/central/up χ²_2π slices; points = DATA, --- MC]

  43. Result of the Fake-Acci-Split calibration: end of analysis
[Plots: χ²_2π projections for all events and in the low/high χ²_3π slices; points = DATA, --- MC]

  44. Data-MC comparison after optimization
[Plots: signal box (Sbox) and control regions (CSbox, Up, CUp, Down, CDown)]
Comparison between DATA and MC after the optimization procedure: Nobs = 2, Bexp = 3.13 ± 0.82.

  45. Data-MC comparison after optimization
This is the comparison at the end of the analysis for the candidate events, with the MC normalized with the fit weights.
[Plots: Data and MC]

  46. Systematic errors on the background
We consider the following sources of systematic error in the determination of the background expectation at the end of the analysis chain (Bexp):
• composition of the MC sample
• effect of the track veto
• energy scale and resolution
• effect of the χ²_fit cut

  47. Systematic error on Bkg: MC composition
• A source of systematic error is given by the statistical uncertainty on the fit weights. Using the correlation matrix given by the HMCMLL procedure, we propagate these errors, obtaining a systematic of 0.05 events on Bexp.
• We also tested the reliability of our fake-KL-crash procedure by directly requiring, on the NEWMC only, the KL-crash tag on the sample. A new calibration of the MC composition is performed. The largest correction, with respect to the adopted analysis, is on the determination of the fakes in the Sbox. We obtain a relative error on the fakes of (13 ± 8)%, which translates into a Bexp change of 0.06 events.

  48. Systematic error on Bkg: Track veto
To evaluate a systematic error on Bexp, we have built the cumulative distributions in ΔE/σE of the rejected and surviving events in the data and MC samples, and we have calculated their ratio.
[Plots: surviving and rejected events vs ΔE/σE]

  49. Systematic error on Bkg: Track veto
The rejected sample is composed 44% of fakes; this category becomes dominant at ΔE/σE > 1.7. The ratio of the cumulatives gives 1.06 and provides a first estimate of the error to be assigned to the vetoed fakes.
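
The cumulative-ratio check can be sketched as follows: count data and MC events above a ΔE/σE threshold and take the ratio. The inputs below are toy numbers, not the analysis samples.

```python
def cumulative_ratio(data_vals, mc_vals, threshold):
    """Data/MC ratio of the cumulative counts above a threshold in
    DeltaE/sigma_E.  A toy version of the cumulative-curve comparison
    used to assign the track-veto systematic; the inputs are per-event
    values of the discriminating variable."""
    n_data = sum(1 for v in data_vals if v > threshold)
    n_mc = sum(1 for v in mc_vals if v > threshold)
    return n_data / n_mc if n_mc else float("inf")

# toy example
data = [0.5, 1.2, 1.8, 2.1, 2.4, 3.0]
mc = [0.4, 1.1, 1.9, 2.2, 2.5]
print(cumulative_ratio(data, mc, 1.7))
```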

  50. Systematic error on Bkg: Track veto
The Data-MC ratio, Rb, of the population of the lowest β* side-band better represents the discrepancy in the fakes: Rb = (1.10 ± 0.001). The relative error on the number of fakes surviving the veto is obtained from Rb and from FTRV = 0.76, the fraction of vetoed events (MC). We obtain a systematic of 5% on Bexp.
