Discussion on averages



1. Discussion on averages
ATLAS Statistics Forum, 22 Feb 2018
Glen Cowan, RHUL Physics

2. mt discussion
On 6 Feb 18 the Stat Forum (GC and NB) met with the top mass analysers and others to discuss a method for averaging mt values. The method proposed for performing the combination in the upcoming paper was initially discussed in Sect. 9 of https://link.springer.com/article/10.1140%2Fepjc%2Fs10052-014-3004-2 and was already used by ATLAS in the top quark mass conference note for Top 2017: https://cds.cern.ch/record/2285809. A measurement is used only if its inclusion in the average improves the average's total uncertainty by an amount that is not much smaller than the uncertainty of the total uncertainty itself. This resulted in using only 3 of the 6 mt measurements, and in a lengthy discussion.

3. Initial thoughts (GC)
In general the choice of estimator is not fixed, so there is no fundamental argument for or against BLUE vs. Maximum Likelihood (ML). Nevertheless, the individual measurements are based on ML, so the natural (?) way to combine them is to form the full likelihood function that represents the joint observation of the individual estimates (or the closest approximation to this given the available information). The ML "average" is not guaranteed to have zero bias, but the bias should be small to the extent that the estimator is approximately linear in the measurements. For some special cases one can show that ML and BLUE are identical (e.g., uncorrelated stat. and sys. errors); conjecture: in all cases of practical interest they should be very close.

4. Starting points and extensions
The basic starting point is the likelihood (for both frequentist and Bayesian); here we stick with the frequentist approach and (usually) use the maximum of log L as the estimator. In some limiting cases this reduces to BLUE (Gaussian measurements; log L quadratic in the measurements and parameters). If either approach is modified, the two do not in general remain equivalent, e.g., with the inclusion of errors on errors, or the use of non-Gaussian distributions in the likelihood (e.g., gamma, log-normal, Student's t). Extensions to the likelihood are usually easy to make and to interpret.

5. ML example
Suppose we measure uncorrelated y_i ~ Gauss(μ, σ_i²), i = 1, ..., N, so that the likelihood is

$$L(\mu) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left[-\frac{(y_i-\mu)^2}{2\sigma_i^2}\right].$$

Maximizing the likelihood is equivalent to minimizing

$$\chi^2(\mu) = \sum_{i=1}^{N} \frac{(y_i-\mu)^2}{\sigma_i^2}.$$

This gives a linear unbiased estimator with minimum variance (i.e., equivalent to BLUE):

$$\hat{\mu} = \frac{\sum_i y_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad V[\hat{\mu}] = \left(\sum_i \frac{1}{\sigma_i^2}\right)^{-1}.$$
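A minimal numerical sketch of this ML/BLUE average (the function name and the example inputs are illustrative, not from the slides):

```python
import numpy as np

def weighted_average(y, sigma):
    """ML/BLUE average of uncorrelated Gaussian measurements y_i +/- sigma_i."""
    y, sigma = np.asarray(y, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2                     # weights w_i = 1/sigma_i^2
    mu_hat = np.sum(w * y) / np.sum(w)     # mu_hat = sum(w_i y_i) / sum(w_i)
    return mu_hat, np.sqrt(1.0 / np.sum(w))

# e.g. 0.9 +/- 0.1 and 1.1 +/- 0.1 combine to 1.0 +/- 0.0707
print(weighted_average([0.9, 1.1], [0.1, 0.1]))
```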

6. ML example with systematics
Suppose now y_i ~ Gauss(μ + θ_i, σ_i²), and that we have estimates u_i ~ Gauss(θ_i, σ_{u,i}²) of the biases θ_i, so that the likelihood becomes

$$L(\mu,\boldsymbol{\theta}) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left[-\frac{(y_i-\mu-\theta_i)^2}{2\sigma_i^2}\right] \frac{1}{\sqrt{2\pi\sigma_{u,i}^2}} \exp\!\left[-\frac{(u_i-\theta_i)^2}{2\sigma_{u,i}^2}\right].$$

After profiling over the nuisance parameters θ_i, one obtains the same result as before but with

$$\sigma_i^2 \to \sigma_i^2 + \sigma_{u,i}^2.$$

So again this is the same as BLUE, extended to the addition of the statistical and systematic errors in quadrature.
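Numerically, the profiled result amounts to replacing each variance by the quadrature sum before averaging; a sketch under that reading (names and inputs are illustrative):

```python
import numpy as np

def average_with_systematics(y, sigma_stat, sigma_sys):
    """Weighted average with sigma_i^2 -> sigma_i^2 + sigma_u_i^2,
    i.e. the profiled-likelihood result quoted on the slide."""
    var = np.asarray(sigma_stat, float)**2 + np.asarray(sigma_sys, float)**2
    w = 1.0 / var
    mu_hat = np.sum(w * np.asarray(y, float)) / np.sum(w)
    return mu_hat, np.sqrt(1.0 / np.sum(w))

# e.g. the toy inputs used later: 0.9 and 1.1, each +/- 0.1 (stat) +/- 0.1 (sys)
print(average_with_systematics([0.9, 1.1], [0.1, 0.1], [0.1, 0.1]))
```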

7. Example extension: PDG scale factor
Suppose we do not want to take the quoted errors as known constants. Scale the variances by a factor ϕ,

$$\sigma_i^2 \to \phi\,\sigma_i^2.$$

The likelihood function becomes

$$L(\mu,\phi) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\phi\sigma_i^2}} \exp\!\left[-\frac{(y_i-\mu)^2}{2\phi\sigma_i^2}\right].$$

The estimator for μ is the same as before; for ϕ, ML gives

$$\hat{\phi} = \frac{1}{N}\sum_{i=1}^{N}\frac{(y_i-\hat{\mu})^2}{\sigma_i^2},$$

which has a bias;

$$\hat{\phi} = \frac{1}{N-1}\sum_{i=1}^{N}\frac{(y_i-\hat{\mu})^2}{\sigma_i^2}$$

is unbiased. The variance of μ̂ is inflated by ϕ̂:

$$V[\hat{\mu}] = \hat{\phi}\left(\sum_i \frac{1}{\sigma_i^2}\right)^{-1}.$$
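A short numerical sketch of the scale-factor recipe, using the unbiased χ²/(N−1) estimate from the slide; the function name and inputs are illustrative:

```python
import numpy as np

def scaled_average(y, sigma):
    """Weighted average with the error inflated by the fitted scale
    factor phi (unbiased chi^2/(N-1) estimate)."""
    y, sigma = np.asarray(y, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    mu_hat = np.sum(w * y) / np.sum(w)
    phi_hat = np.sum((y - mu_hat)**2 / sigma**2) / (len(y) - 1)
    return mu_hat, np.sqrt(phi_hat / np.sum(w)), phi_hat

# mutually inconsistent inputs give phi > 1 and an inflated error
print(scaled_average([0.9, 1.4], [0.1, 0.1]))
```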

8. Example extension: errors on errors
Suppose we want to treat the systematic errors σ_{u,i} as uncertain. For example, let λ_i = ln σ_{u,i}, and suppose we have a Gaussian distributed estimate v_i ~ Gauss(λ_i, σ_{v,i}²). The likelihood is

$$L(\mu,\boldsymbol{\theta},\boldsymbol{\lambda}) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left[-\frac{(y_i-\mu-\theta_i)^2}{2\sigma_i^2}\right] \frac{1}{\sqrt{2\pi e^{2\lambda_i}}} \exp\!\left[-\frac{(u_i-\theta_i)^2}{2e^{2\lambda_i}}\right] \frac{1}{\sqrt{2\pi\sigma_{v,i}^2}} \exp\!\left[-\frac{(v_i-\lambda_i)^2}{2\sigma_{v,i}^2}\right],$$

where the σ_{v,i} represent the relative errors on the sys. errors. ln L is no longer only a sum of squares, and it is not clear how this relates to BLUE. The errors on the sys. errors inflate the error of the estimate for μ.

9. Example extension: errors on errors
Toy example:
y1 = 0.9 ± 0.1 (stat) ± 0.1 (sys)
y2 = 1.1 ± 0.1 (stat) ± 0.1 (sys)
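A numerical sketch of this profile-likelihood fit applied to the toy inputs. The relative errors on the sys. errors (sigma_v below), the bias estimates u_i, and the choice of minimizer are illustrative assumptions; constants in −2 ln L are dropped:

```python
import numpy as np
from scipy.optimize import minimize

y       = np.array([0.9, 1.1])   # measured values (toy example)
sigma   = np.array([0.1, 0.1])   # statistical errors
u       = np.array([0.0, 0.0])   # estimates u_i of the biases theta_i
v       = np.log([0.1, 0.1])     # estimates v_i of lambda_i = ln(sigma_u_i)
sigma_v = np.array([0.5, 0.5])   # errors on v_i, i.e. rel. errors on sys. errors

def nll(pars):
    """-2 ln L in mu, theta_i and lambda_i, constants dropped."""
    mu, theta, lam = pars[0], pars[1:3], pars[3:5]
    su2 = np.exp(2.0 * lam)      # sigma_u_i^2 = exp(2 lambda_i)
    return np.sum((y - mu - theta)**2 / sigma**2
                  + (u - theta)**2 / su2 + 2.0 * lam
                  + (v - lam)**2 / sigma_v**2)

fit = minimize(nll, np.concatenate(([1.0], u, v)), method="BFGS")
print("mu_hat =", fit.x[0])
```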

10. Non-Gaussian likelihoods
The tails of a Gaussian fall off very quickly; this is not always the most realistic model, especially for systematics. One can try, e.g., Student's t: ν = 1 gives a Cauchy distribution, large ν gives a Gaussian. One can either fix ν or constrain it like the other nuisance parameters. ML is then no longer equivalent to least squares or BLUE.
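A minimal sketch of such a fit with Student's t terms of fixed ν; the value ν = 4 and the inputs are illustrative assumptions:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

y, sigma, nu = np.array([0.9, 1.1]), np.array([0.1, 0.1]), 4.0

def nll(mu):
    # -ln L with each scaled residual (y_i - mu)/sigma_i modelled as Student's t
    return -np.sum(stats.t.logpdf((y - mu) / sigma, df=nu))

print("mu_hat =", minimize_scalar(nll).x)
```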

11. Gamma distribution for sys. errors
The first attempt, which treated the estimates of the sys. errors as log-normal distributed, has a long tail towards large errors; it may be more realistic to use a gamma distribution:

$$f(x;\alpha,\beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x}$$

(here k = α, θ = 1/β in the shape-scale convention). Take s ~ Gamma(α, β) as the "measurement" of the sys. error σ_u; for a relative uncertainty r in the sys. error, set α = 1/r² and β = α/σ_u.
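A quick check of this parameterization (the numbers are illustrative): with α = 1/r² and β = α/σ_u, the gamma distribution has mean σ_u and relative standard deviation r:

```python
import numpy as np
from scipy import stats

sigma_u, r = 0.1, 0.3                        # illustrative sys. error and rel. unc.
alpha, beta = 1.0 / r**2, 1.0 / (r**2 * sigma_u)

g = stats.gamma(a=alpha, scale=1.0 / beta)   # scipy uses shape k = alpha, scale theta = 1/beta
print(g.mean(), g.std() / g.mean())          # -> 0.1  0.3
```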

12. Likelihood with gamma distributed sys. err.
Using a gamma distribution for the sys. errors gives

$$L(\mu,\boldsymbol{\theta},\boldsymbol{\sigma}_u) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left[-\frac{(y_i-\mu-\theta_i)^2}{2\sigma_i^2}\right] \frac{1}{\sqrt{2\pi\sigma_{u,i}^2}} \exp\!\left[-\frac{(u_i-\theta_i)^2}{2\sigma_{u,i}^2}\right] \frac{\beta_i^{\alpha_i}}{\Gamma(\alpha_i)}\, s_i^{\alpha_i-1} e^{-\beta_i s_i}.$$

For each measurement one needs the relative uncertainty r_i in the sys. error; then α_i = 1/r_i² and β_i = α_i/σ_{u,i}.
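A numerical sketch of this fit; the inputs and the relative errors r are illustrative assumptions, constants in −2 ln L are dropped, and σ_{u,i} is parameterized by its log to keep it positive during minimization:

```python
import numpy as np
from scipy.optimize import minimize

y     = np.array([0.9, 1.1])   # measured values
sigma = np.array([0.1, 0.1])   # statistical errors
u     = np.array([0.0, 0.0])   # estimates u_i of the biases theta_i
s     = np.array([0.1, 0.1])   # gamma-distributed estimates of sigma_u_i
r     = np.array([0.5, 0.5])   # relative errors on the sys. errors
alpha = 1.0 / r**2             # alpha_i = 1/r_i^2; beta_i = alpha_i/sigma_u_i

def nll(pars):
    """-2 ln L in mu, theta_i and ln(sigma_u_i), constants dropped."""
    mu, theta, su = pars[0], pars[1:3], np.exp(pars[3:5])
    gauss = ((y - mu - theta)**2 / sigma**2
             + (u - theta)**2 / su**2 + 2.0 * np.log(su))
    gamma = 2.0 * alpha * (np.log(su) + s / su)   # terms from the gamma pdf
    return np.sum(gauss + gamma)

fit = minimize(nll, np.concatenate(([1.0], u, np.log(s))), method="BFGS")
print("mu_hat =", fit.x[0])
```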

13. Likelihood with gamma distributed sys. err.
The error on the average is inflated when the relative error on the sys. error is large, but only when the points to be averaged show a large variance.
