
Significance: Does  2 give  2 ?

Significance: Does  2 give  2 ?. Roger Barlow Montreal Collaboration meeting June 2006. How it started. Analysis looking for bumps Pure background gives  2 old of 60 for 37 dof (Prob 1%). Not good but not totally impossible


Significance: Does  2 give  2 ?





  1. Significance: Does 2 give 2? Roger Barlow Montreal Collaboration meeting June 2006

  2. How it started. Analysis looking for bumps. Pure background gives χ²_old of 60 for 37 dof (Prob ≈ 1%). Not good, but not totally impossible. Fit to background + bump (4 new parameters) gives a better χ²_new of 28. Question: is this improvement significant? Answer: significance is √(χ²_old − χ²_new) = √(60 − 28) = 5.65. Schematic only!! No reference to any other resonance, real or fictitious. Puzzle: how does a 3 sigma discrepancy become a 5 sigma discovery?
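The arithmetic on this slide can be checked in a few lines. A minimal sketch, assuming SciPy is available; the numbers (60, 37, 28) are taken directly from the slide:

```python
# Check the slide's arithmetic: background-only p-value and the quoted
# significance sqrt(chi2_old - chi2_new). Assumes SciPy is installed.
import math
from scipy.stats import chi2, norm

# Background-only fit: chi2 = 60 with 37 degrees of freedom.
p_bg = chi2.sf(60, 37)      # upper-tail probability, roughly 1%
z_bg = norm.isf(p_bg)       # the same tail expressed in Gaussian sigmas

# Quoted significance of the bump: sqrt(chi2_old - chi2_new).
signif = math.sqrt(60 - 28)

print(f"P(chi2 >= 60 | 37 dof) = {p_bg:.3f} (~{z_bg:.1f} sigma)")
print(f"sqrt(60 - 28) = {signif:.2f}")
```

This makes the puzzle concrete: the background-only fit is only a few-sigma discrepancy, yet the quoted Δχ² significance is above 5.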

  3. Justification? • 'We always do it this way' • 'Belle does it this way' • 'CLEO does it this way'

  4. Possible Justification: Likelihood Ratio Test, a.k.a. Maximum Likelihood Ratio Test. If M1 and M2 are models with maximum likelihoods L1 and L2 for the data, then 2 ln(L2/L1) is distributed as a χ² with N1 − N2 degrees of freedom, provided that: • M2 contains M1 • the Ns are large • errors are Gaussian • the models are linear
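A toy check of this theorem in a case where all four conditions hold. This is an illustrative sketch, not from the slides: M1 fixes the mean of Gaussian data at its true value, M2 fits it, so 2 ln(L2/L1) should follow a χ² with 1 degree of freedom (mean 1, variance 2):

```python
# Toy check of the likelihood ratio theorem in an ideal (nested, Gaussian,
# linear, large-N) case. Assumes NumPy is available.
import numpy as np

rng = np.random.default_rng(1)
n_exp, n = 5000, 100

# Each row is one 'experiment': 100 Gaussian measurements, true mean 0, sigma 1.
x = rng.normal(0.0, 1.0, size=(n_exp, n))

# M1: mean fixed at 0.        -2 ln L1 = sum(x^2)          (up to constants)
# M2: mean fitted (= xbar).   -2 ln L2 = sum((x - xbar)^2)
xbar = x.mean(axis=1)
delta = (x**2).sum(axis=1) - ((x - xbar[:, None])**2).sum(axis=1)  # = n * xbar^2

# If the theorem holds, delta ~ chi2 with 1 dof: mean 1, variance 2.
print(f"mean = {delta.mean():.2f} (chi2_1 expects 1.00)")
print(f"var  = {delta.var():.2f} (chi2_1 expects 2.00)")
```

When the conditions are met the agreement is essentially exact; the rest of the talk probes what happens when they are not.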

  5. Does it matter? • Investigate with toy MC • Generate with a uniform distribution in 100 bins, <events/bin> = 100. 100 is large, so Poisson is reasonably Gaussian • Fit with: a uniform distribution (99 dof), a linear distribution (98 dof), a cubic (96 dof), flat + Gaussian (96 dof). The cubic is linear(!) in its parameters: a0 + a1x + a2x² + a3x³. The Gaussian is not linear in μ and σ.

  6. One 'experiment'. [Plots of the four fits to one toy dataset: flat, linear, cubic, flat + Gauss]

  7. Calculate 2 probabilities of models on their own From the 2 and Ndof for the 4 models. Not bad. Show Gaussian approximation is working fairly well. Ideal would be flat for all

  8. Calculate 2 probabilities of differences in models Compare linear and uniform models. 1 dof. Probability flat Method OK Compare flat+gaussin and uniform models. 3 dof. Probability very unflat Method invalid Peak at low P corresponds to large 2 i.e. false claims of significant signal Compare cubic and uniform models. 3 dof. Probability flat Method OK

  9. Not all parameters are equally useful. If 2 models have the same number of parameters and both contain the true model, one can still give better results than the other. This tells us nothing about the data. Shows Δχ² for flat + Gauss vs. cubic: same number of parameters, but flat + Gauss tends to be lower.

  10. Helpful (?) way of thinking • The method is valid if you fix the Gaussian position and width and just vary the size (1 dof, and linear). OK for investigating a known peak. • Intuitively sensible for small σ: you fit the known bin exactly, so its contribution (≈1) to χ² is zapped. • If you keep σ small and float μ, this gives your fit the power to zap the bin with the highest χ² contribution. That Δχ² is larger, tricky to calculate, and depends on the number of bins. The result is not guaranteed to be χ².
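The 'zap the worst bin' argument can be made quantitative with a small sketch, assuming NumPy. A fixed narrow peak removes one predetermined bin's contribution (average ≈ 1), while a floating peak position removes the largest of 100 contributions, which is much bigger:

```python
# Sketch of the 'zap the worst bin' argument: a narrow Gaussian with
# floating position can cancel whichever bin deviates most, so the
# Delta chi2 it buys is the maximum per-bin contribution, not a chi2_1
# draw. Assumes NumPy is available.
import numpy as np

rng = np.random.default_rng(4)
n_exp, n_bins, mu = 2000, 100, 100.0
n = rng.poisson(mu, size=(n_exp, n_bins)).astype(float)

contrib = (n - mu)**2 / mu   # per-bin chi2 contributions

# Known peak position (1 dof, linear): one fixed bin is zapped.
fixed_bin = contrib[:, 0]
# Floating position: the largest of the 100 contributions is zapped.
worst_bin = contrib.max(axis=1)

print(f"fixed bin zapped:    <Delta chi2> = {fixed_bin.mean():.2f}")
print(f"worst of 100 zapped: <Delta chi2> = {worst_bin.mean():.2f}")
```

The second number grows with the number of bins, which is exactly why the resulting Δχ² is not distributed as a χ² with the naive number of degrees of freedom.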

  11. Conclusion: Does 2 give 2? No
