A limit on the diffuse flux of muon neutrinos using data from 2000 – 2003
Jessica Hodges, University of Wisconsin – Madison
Baton Rouge Collaboration Meeting, April 13, 2006

Presentation Transcript


  1. A limit on the diffuse flux of muon neutrinos using data from 2000 – 2003. Jessica Hodges, University of Wisconsin – Madison. Baton Rouge Collaboration Meeting, April 13, 2006. "Light at the end of the tunnel."

  2. Search for a Diffuse Flux of Neutrinos (TeV – PeV). [Figure: downgoing muons and neutrinos, contrasting the E^-2 "signal" spectrum with the E^-3.7 atmospheric spectrum.] 2000 – 2003: 807 days of detector livetime. Monte Carlo simulation: Atmospheric Muons: muons created when cosmic rays hit the atmosphere, including simulation of simultaneous downgoing muons. Atmospheric Neutrinos: neutrinos created when cosmic rays hit the atmosphere; these have an E^-3.7 energy spectrum. Signal Neutrinos: extraterrestrial neutrinos with an E^-2 energy spectrum. <1> Remove downgoing events with a zenith angle cut and by requiring high-quality event observables. <2> Separate atmospheric neutrinos from signal with an energy cut.

  3. After Event Quality Cuts. [Figure: the zenith angle distribution of high-quality events before an energy cut is applied, showing upgoing and horizontal events.] The signal test flux is E^2 Φ = 10^-6 GeV cm^-2 s^-1 sr^-1.

  4. After event quality cuts, this is the final sample of upgoing events for 4 years. [Figure: NChannel distributions on linear and log scales, with the kept region marked; the signal hypothesis is E^2 Φ = 10^-6.] Key Elements for Setting a Limit: 1) the number of actual data events observed, 2) the number of background events predicted, 3) the number of signal events predicted given the signal strength that you are testing.

  5. How to calculate the number of background events in the final sample: 1) Count the number of data events above and below the final energy cut (NChannel >= 100). 2) Count the number of atmospheric neutrinos above and below the final cut. 3) Apply a scale factor to the low-energy Monte Carlo events so that the number of events exactly matches the low-energy data. 4) Apply this same scale factor to the background Monte Carlo above NChannel = 100. This is the number of background events (b) that goes into the final computation of the limit. This sounds very easy, but what does it mean to "normalize the Monte Carlo"?
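A minimal sketch of this normalization procedure, assuming toy NumPy arrays in place of the real data and atmospheric-neutrino Monte Carlo (all array names and values are illustrative):

```python
import numpy as np

# Toy stand-ins for the analysis arrays (illustrative values only):
# NChannel of each data event, plus NChannel and per-event weights
# (expected events in the livetime) for the atmospheric-neutrino MC.
rng = np.random.default_rng(0)
data_nch = rng.integers(20, 200, size=500)
mc_nch = rng.integers(20, 200, size=20000)
mc_weight = np.full(mc_nch.size, 0.03)

CUT = 100  # final energy cut: NChannel >= 100

# Steps 1-2: count data events and weighted MC events below the cut.
n_data_low = np.sum(data_nch < CUT)
n_mc_low = mc_weight[mc_nch < CUT].sum()

# Step 3: scale factor that forces the low-energy MC to match the data.
scale = n_data_low / n_mc_low

# Step 4: the same factor applied above the cut gives the background b.
b = scale * mc_weight[mc_nch >= CUT].sum()
print(f"scale = {scale:.3f}, predicted background b = {b:.2f} events")
```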

  6. Scaling the low-energy atmospheric neutrino Monte Carlo prediction to the low-energy data can mean either: 1) You believe the atmospheric neutrino theory to be correct, and you are "correcting" the detector acceptance or efficiency. (This could be many factors: ice, muon propagation, OM sensitivity, ....) This was done in the 1997 B10 analysis. 2) You are "correcting" the theoretical flux that went into the Monte Carlo prediction, because you believe that the theorists incorrectly predicted the atmospheric neutrino flux.

  7. Two Interpretations of What it Means to Scale the Low-Energy Monte Carlo Events to the Low-Energy Data. [Figure: data, atmospheric MC, and signal distributions under each interpretation.] 1) Apply the scale factor to correct for detector efficiency. DO apply the correction factor to the signal, since we are correcting the entire detector efficiency or acceptance. 2) Apply the scale factor to correct the theory that predicts the atmospheric neutrino flux. Do NOT apply the correction to the signal, since it was meant for only atmospheric neutrinos.

  8. We chose to attribute the scale factor to the uncertainties in the atmospheric neutrino flux. Hence, I will NOT apply the scale factor to the signal Monte Carlo. This means that any uncertainties in our detection of the predicted flux must be accounted for separately.

  9. Key Elements for Setting a Limit: 1) the number of actual data events observed, 2) the number of background events predicted, 3) the number of signal events predicted given the signal strength that you are testing. Element 1 is easy: 6 events were observed. Elements 2 and 3 are based on Monte Carlo simulations that contain many uncertain inputs! We must consider how systematic errors would change the amount of signal or background in the final sample.

  10. First, let’s consider the uncertainties in the background prediction. Every model or uncertainty that is applied to the background spectrum affects the number of events that will survive to the final event sample.

  11. In the past, most AMANDA analyses have used the Lipari model for atmospheric neutrinos. Instead, I will use the more up-to-date calculations done by two different groups: Barr, Gaisser, Lipari, Robbins, Stanev 2004 (BARTOL) and Honda, Kajita, Kasahara, Midorikawa 2004 (HONDA). Instead of 1 background model (Lipari), there are now 2 background predictions (Bartol and Honda).

  12. The atmospheric neutrino flux models (Bartol and Honda) are affected by: 1) uncertainties in the cosmic ray spectrum, 2) uncertainties about hadronic interactions. [Figure: Cosmic Ray Proton Flux, taken from the HKKM 2004 paper; short dashed green line = old HONDA, pink solid line = HONDA 2004, dashed green line = BARTOL 2004.]

  13. Uncertainties in the cosmic ray spectrum and hadronic interactions were estimated as a function of energy. [Figure: percentage uncertainty in the atmospheric neutrino flux vs. log10(Eν).] Every background Monte Carlo event can be weighted with this function by using the event's true energy.
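A hedged sketch of this per-event weighting; the uncertainty curve below is a placeholder interpolation, not the published estimate, and the MC arrays are toys:

```python
import numpy as np

def flux_uncertainty(log10_e):
    """Fractional uncertainty on the atmospheric-nu flux vs. log10(E/GeV).
    The knot values here are placeholders, not the actual published curve."""
    return np.interp(log10_e, [2.0, 4.0, 6.0], [0.15, 0.25, 0.40])

# Toy true neutrino energies (GeV) and nominal weights for the background MC.
rng = np.random.default_rng(1)
e_true = 10 ** rng.uniform(2.0, 6.0, size=10000)
w_nominal = np.full(e_true.size, 1e-3)

sigma = flux_uncertainty(np.log10(e_true))
w_max = w_nominal * (1.0 + sigma)  # every event weighted UP by its max error
w_min = w_nominal * (1.0 - sigma)  # every event weighted DOWN

print(f"nominal = {w_nominal.sum():.2f}, "
      f"max = {w_max.sum():.2f}, min = {w_min.sum():.2f} events")
```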

  14. If every background event is weighted UP by its maximum error, then you get a new, higher prediction of the background. Every event can also be weighted DOWN by its maximum amount of error, giving a new, minimum prediction. [Figure: Bartol Max, Bartol, Bartol Min; Honda Max, Honda, Honda Min.] There are now 6 background predictions.

  15. Our Monte Carlo simulation is NOT perfect. Reasons for disagreement between data and Monte Carlo: ice properties, muon propagation, OM sensitivity, other unknowns? Consider what happens when you cut on a distribution that does not show perfect agreement. [Figure: blue is the true distribution (data) and orange is the Monte Carlo, with the cut and kept region marked.] A cut on this distribution would yield TOO MANY Monte Carlo events compared to the truth (data).

  16. Fortunately, I have a good sample of downgoing muons and minimum bias data that can be used to study the uncertainties in my cuts! [Figure: data vs. atmospheric μ.] I have performed an inverted analysis to select the highest-quality downgoing events. All cuts and reconstructions are the same as in the upgoing analysis, just turned upside-down.

  17. Examining the Cuts with Downgoing Muons. See disagreement at the final cut level. Go back to the level before event quality cuts were made. Estimate a percentage shift for the Monte Carlo in each parameter that will provide better data–MC agreement. Apply the shifted Monte Carlo cuts at the final cut level. [Figure: median resolution (degrees).]

  18. Examining the Cuts with Downgoing Muons. See disagreement at the final cut level. Go back to the level before event quality cuts were made. Estimate a percentage shift for the Monte Carlo in each parameter that will provide better data–MC agreement. Apply the shifted Monte Carlo cuts at the final cut level. [Figure: Ndirc, the number of direct hits.]

  19. "Shift" the downgoing Monte Carlo in the parameters that show disagreement. This is tricky because you must find a way to shift each parameter into agreement without creating disagreement in other parameters. Estimate a correction to the Monte Carlo for each parameter before any cuts are applied. Apply all of the shifted cuts at the final cut level and see if all of the distributions show agreement. RESULTING SHIFTS showing good agreement across all important parameters (see the sketch below): number of direct hits (ndirc) -> 1.1 * ndirc; smoothness of hits along track (smootallphit) -> 1.08 * smootallphit; median resolution -> 1.05 * med_res; likelihood ratio (up to down) -> 1.01 * L.R.
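A minimal sketch of applying shifted versus unshifted cuts, using the quoted shift factors and the final cut thresholds listed at the end of the talk (the zenith-dependent likelihood-ratio cut is omitted for brevity, and the event fields are illustrative):

```python
# Shift factors quoted above; cut thresholds from the final-cuts slide.
SHIFTS = {"ndirc": 1.10, "smootallphit": 1.08, "med_res": 1.05}

def passes_quality_cuts(ev, shifted=False):
    """Apply the quality cuts, optionally on shifted MC observables."""
    s = SHIFTS if shifted else {k: 1.0 for k in SHIFTS}
    return (s["ndirc"] * ev["ndirc"] > 13
            and abs(s["smootallphit"] * ev["smootallphit"]) < 0.250
            and s["med_res"] * ev["med_res"] < 4.0)

# A borderline toy event: shifting the MC moves it across a cut boundary.
ev = {"ndirc": 12.5, "smootallphit": 0.22, "med_res": 3.7}
print(passes_quality_cuts(ev), passes_quality_cuts(ev, shifted=True))
# -> False True
```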

  20. I will use the modified cuts from the downgoing analysis to apply an additional uncertainty to my upgoing events (both signal and background). Each model can be considered WITH and WITHOUT the Monte Carlo shifted in those 4 parameters. [Figure: Bartol Max, Bartol, Bartol Min, Honda Max, Honda, Honda Min, each with normal and shifted MC; signal with normal and shifted MC.] There are now 12 background predictions and 2 signal predictions.

  21. Number of Events in the Final Data Set (Nch >= 100): average background = 6.12, average signal = 66.7.

  22. These are the 12 background predictions for the final sample, ranging from 4.52 to 7.79 events. [Figure: red = background from normal cuts, blue = background from shifted MC cuts.]

  23. Another look at the inverted analysis: does the detector have a linear response in NChannel? [Figure: NChannel comparison between the minimum bias data and the dCorsika atmospheric muon simulation, shown on linear and log scales.]

  24. It would be desirable for the ratio of data to atmospheric muons as a function of NChannel to be flat. A fit to the ratio as a function of NChannel is not exactly flat, but its slope is very small. This suggests that we should not use events between 0 and 50 channels hit to perform the upgoing normalization. How does this affect the signal and background predictions? A toy fit is sketched below.
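A toy illustration of fitting the data/MC ratio as a function of NChannel (all numbers below are invented for the sketch):

```python
import numpy as np

# Toy NChannel bin centers and data/atm-mu ratio values (illustrative).
rng = np.random.default_rng(2)
nch = np.arange(50, 300, 10, dtype=float)
ratio = 1.0 + 2e-4 * (nch - 50) + rng.normal(0, 0.02, nch.size)

# First-order polynomial fit: a perfectly linear detector response would
# give a slope of ~0 once the low-NChannel normalization is applied.
slope, intercept = np.polyfit(nch, ratio, 1)
print(f"slope = {slope:.2e} per channel, intercept = {intercept:.3f}")
```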

  25. I used the fit as a function of NChannel to scale up the Monte Carlo from the upgoing analysis. How the upgoing atmospheric neutrinos responded: this only caused a small change in the number of background events predicted to be in my final sample. The uncertainty due to NChannel being a non-linear parameter is at most 10%. This is much less than the other errors I have been considering, so I will not worry about it. How the upgoing signal neutrinos responded: the number of signal events predicted above the NChannel cut changed from 68.4 events to 85.6 events. This is a 25% error.

  26. There may be additional detector effects which mean that our signal efficiency is not 1.0. Changing the OM sensitivity seems to have a linear effect on the NChannel spectrum of the signal. Consider the uncertainty in the number of signal events predicted in the final sample from this effect to be 10%. Combining this in quadrature with the 25% uncertainty due to NChannel non-linearity gives the uncertainty in our overall detection of the signal: (10^2 + 25^2)^(1/2) ≈ 27%. Now it's time to build the confidence belt.

  27. How to include systematic errors in confidence belt construction. Systematic errors are included by the methods laid out in: Cousins and Highland (Nucl. Instrum. Methods Phys. Res. A, 1992), Conrad et al. (Phys. Rev. D, 2003), and Hill (Phys. Rev. D, 2003). P(x | μ, P(ε, b)) = ∫ P(x | ε'μ + b') P(ε', b') dε' db'. I have found 12 different background predictions, b, and 3 different values for the signal efficiency, ε. By integrating over these options, I can include systematic errors in my confidence belt construction. This can be simplified with a summation and the Poisson formula.

  28. The construction of the Feldman-Cousins confidence belt relies on the Poisson distribution: P(x | μ + b) = (μ + b)^x e^-(μ + b) / x!. At every value of the signal strength, μ, you can calculate a probability distribution function over x, the number of events observed. [Figure: P(x | μ + b) vs. signal strength μ and observed count x.]
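As a concrete illustration, a minimal Python sketch of these Poisson PDFs, using the average background from slide 21 and illustrative signal strengths:

```python
import numpy as np
from scipy.stats import poisson

b = 6.12               # average background prediction from slide 21
x = np.arange(0, 25)   # possible observed event counts

# One Poisson PDF P(x | mu + b) for each tested signal strength mu.
for mu in (0.0, 2.0, 5.0):
    print(f"mu = {mu}:", np.round(poisson.pmf(x, mu + b), 3))
```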

  29. Applying the efficiency error to the PDF: replace (μ + b) with (εμ + b), where ε is the efficiency uncertainty on the signal; ε = 0.73, 1.00 or 1.27. Why not put a factor of ε on the background? 1) Linear Nch-dependent effects in the detector will be removed by normalization at low Nch. 2) Non-linear Nch-dependent effects were computed to be very small compared to the other errors in the background that we are considering.

  30. There are now 12 values for the background prediction and 3 values for the signal efficiency. This makes a total of 36 values of (εμ + b) that you can use to construct the confidence belt. [Figure: the PDF for μ = 0.5; three PDFs are averaged into one (the magenta line).] For my analysis, 36 PDFs are averaged into one for every value of μ, as in the sketch below.
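A hedged sketch of that averaging; the three efficiencies are the quoted ones, while the 12 backgrounds are illustrative values spanning the quoted 4.52–7.79 range rather than the actual 12 predictions:

```python
import numpy as np
from scipy.stats import poisson

efficiencies = np.array([0.73, 1.00, 1.27])   # quoted efficiency factors
backgrounds = np.linspace(4.52, 7.79, 12)     # illustrative stand-ins

def averaged_pdf(x, mu):
    """P(x | mu) averaged over the 36 (eps, b) combinations, i.e. the
    integral on slide 27 turned into a finite sum."""
    pdfs = [poisson.pmf(x, eps * mu + b)
            for eps in efficiencies for b in backgrounds]
    return np.mean(pdfs, axis=0)

x = np.arange(0, 25)
print(np.round(averaged_pdf(x, mu=0.5), 3))
```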

  31. My Sensitivity. Event upper limit = 5.78. The most probable observation if μ = 0 is 6 events. (This is also the median.) The sensitivity is the maximum signal strength μ that is consistent with the most probable observation if there is no signal (μ = 0).

  32. My Limit = My Sensitivity. Event upper limit = 5.78. 6 events were observed. The upper limit is the maximum signal strength μ that is consistent with 6 observed events.

  33. No Systematic Errors. Event upper limit = 4.91, the limit with no systematic errors. This confidence belt, without systematic errors, is not as wide. (The background assumption is the average of Bartol central + Honda central.)

  34. E^2 Φ_90% < (E^2 Φ_test) × (event upper limit / n_signal) = 10^-6 × (5.78 / 66.7), giving E^2 Φ_90%(E) < 8.7 × 10^-8 GeV cm^-2 s^-1 sr^-1. The limit with no systematic errors would be 10^-6 × 4.91 / 68.4 = 7.2 × 10^-8.

  35. The limit is valid over the region that contains 90% of the signal in the final data sample (Nch >= 100). [Figure: 90% signal region in log10(E/GeV).] 90% region = 10^4.2 GeV to 10^6.4 GeV = 15.8 TeV to 2.51 PeV.
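One way to extract such a central 90% energy region from a weighted signal Monte Carlo sample, sketched here with a toy E^-2 sample (the real sample comes from the analysis chain):

```python
import numpy as np

# Toy E^-2 signal sample surviving the final cut. Energies are drawn
# uniformly in log10(E), so the E^-2 flux weight divided by the 1/E
# sampling density leaves a 1/E weight per event.
rng = np.random.default_rng(3)
e_true = 10 ** rng.uniform(3.0, 7.0, size=100_000)   # GeV
weights = 1.0 / e_true

order = np.argsort(e_true)
cdf = np.cumsum(weights[order]) / weights.sum()

# Central 90% of the weighted signal: the 5% and 95% energy quantiles.
e_lo = e_true[order][np.searchsorted(cdf, 0.05)]
e_hi = e_true[order][np.searchsorted(cdf, 0.95)]
print(f"90% signal region: {e_lo:.3g} GeV to {e_hi:.3g} GeV")
```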

  36. [Figure: the limit from this analysis.]

  37. Testing signal models other than E^-2: these models have not yet been unblinded, but the sensitivity and suggested NChannel cut for each are listed.

  38. Many thanks... Thanks to Gary Hill, who says he "always has time for another question." Thanks to Teresa Montaruli for help with the neutrino flux models and their uncertainties. Thanks to the "diffuse" discussion group: Gary, Teresa, Chris, Paolo, Albrecht and Francis. Thanks to John and Jim for your advice in the office every day.

  39. Ndirc > 13

  40. Ldirb > 170

  41. abs(Smootallphit) < 0.250

  42. Median resolution < 4.0

  43. No Cogz cut

  44. The likelihood ratio cut was zenith dependent: Jkchi (down - Bayesian) – Jkchi (up - Pandel) > -38.2 * cos(Zenith[7] / 57.29) + 27.506

  45. Zenith > 100°. [Figure: zenith angle markers at ~100°, ~123°, ~180°.]

  46. FINAL CUTS: Ndirc (Pandel) > 13; Ldirb (Pandel) > 170; abs(Smootallphit (Pandel)) < 0.250; Zenith (Pandel) > 100°; Median_resolution (P08err1, P08err2) < 4.0°; Jkchi (down - Bayesian) – Jkchi (up - Pandel) > -38.2 * cos(Zenith[7] / 57.29) + 27.506; Nch >= 100. A sketch of this selection follows.
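A hedged sketch of this final selection as a single boolean mask, assuming a dict of equal-length NumPy arrays with illustrative field names:

```python
import numpy as np

def final_selection(ev):
    """Boolean mask implementing the final cuts listed above. `ev` is a
    dict of NumPy arrays keyed by observable name (names are illustrative);
    angles are in degrees."""
    zen_rad = ev["zenith"] / 57.29  # degrees -> radians, as on the slide
    return ((ev["ndirc"] > 13)
            & (ev["ldirb"] > 170)
            & (np.abs(ev["smootallphit"]) < 0.250)
            & (ev["zenith"] > 100)
            & (ev["med_res"] < 4.0)
            & (ev["jkchi_down"] - ev["jkchi_up"]
               > -38.2 * np.cos(zen_rad) + 27.506)
            & (ev["nch"] >= 100))
```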
