
Cosmological Model Selection


Presentation Transcript


  1. Cosmological Model Selection David Parkinson (with Andrew Liddle & Pia Mukherjee)

  2. Outline • The Evidence: the Bayesian model selection statistic • Methods • Nested Sampling • Results

  3. Concordance Cosmology • A flat universe composed of baryons, cold dark matter and dark energy • Gaussian, adiabatic and nearly scale invariant initial perturbations

  4. Model Extensions • Do we really need only 5 numbers to describe the universe (Ωb, ΩCDM, H0, As, τ)? • Extra dynamic properties: curvature (Ωk), massive neutrinos (Mν), dynamic dark energy (w(z)), etc. • More complex initial conditions: tilt (ns) and running (nrun) of the adiabatic power spectrum, entropy perturbations, etc. • How do we decide if these extensions are justified?

  5. Bayesian Statistics • Two identical urns A and B • A contains 99 black balls and 1 white; B has 99 white and 1 black • P(black|urn A) = 0.99 • Now shuffle the two urns and pull a ball from one of them. Suppose it is black. What is the probability it came from urn A? • Bayesian statistics allows probabilities not just of data, but also of parameters and models
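A short worked version of the urn puzzle (a minimal Python sketch; the equal 50/50 prior on the two urns is an assumption of the illustration):

```python
# With equal prior probability for each urn, Bayes' theorem gives the
# posterior probability that the black ball came from urn A.
p_black_given_A = 0.99
p_black_given_B = 0.01
prior_A = prior_B = 0.5

posterior_A = (p_black_given_A * prior_A) / (
    p_black_given_A * prior_A + p_black_given_B * prior_B
)
print(posterior_A)  # 0.99
```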

  6. Bayes’ Theorem • Bayes’ theorem gives the posterior probability of the parameters (θ) of a model (H) given data (D): P(θ|D,H) = P(D|θ,H) P(θ|H) / P(D|H) • Marginalizing over θ, the evidence is E = P(D|H) = ∫ P(D|θ,H) P(θ|H) dθ • Evidence = average likelihood of the data over the prior parameter space of the model
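As an illustration of the evidence as a prior-averaged likelihood, here is a minimal Monte Carlo sketch in Python; the 1D Gaussian likelihood and the uniform prior range are illustrative assumptions, not a cosmological likelihood:

```python
import numpy as np

# E = ∫ P(D|θ,H) P(θ|H) dθ, estimated by averaging the likelihood over
# samples drawn from the prior.
rng = np.random.default_rng(0)

def likelihood(theta, data_mean=1.0, sigma=0.2):
    return np.exp(-0.5 * ((theta - data_mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

prior_lo, prior_hi = -5.0, 5.0                    # uniform prior U(-5, 5)
theta = rng.uniform(prior_lo, prior_hi, 100_000)  # draws from the prior
evidence = likelihood(theta).mean()               # average likelihood over the prior
print(evidence)                                   # ≈ 0.1 = 1 / (prior width) here
```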

  7. Jeffreys’ Scale • The evidence (or model likelihood) updates the prior model probability through Bayes’ theorem to give the posterior probability of the model. • The ratio of two model posteriors is known as the Bayes factor; for equal model priors it reduces to the ratio of evidences, B01 = E(M0)/E(M1). • Jeffreys’ scale grades |ln B01| into levels of evidence strength.
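A small helper showing the Bayes factor and one common reading of the scale (a sketch; the divisions at |ln B| = 1, 2.5 and 5 and the verbal labels are an assumption here, since the exact convention varies between authors):

```python
import math

def ln_bayes_factor(evidence_0, evidence_1):
    """ln B01 = ln E(M0) - ln E(M1)."""
    return math.log(evidence_0) - math.log(evidence_1)

def jeffreys_verdict(ln_B):
    # Assumed divisions at |ln B| = 1, 2.5 and 5.
    strength = abs(ln_B)
    if strength < 1.0:
        return "not worth more than a bare mention"
    if strength < 2.5:
        return "significant"
    if strength < 5.0:
        return "strong"
    return "decisive"

print(jeffreys_verdict(math.log(8.0)))  # odds of 8:1 -> |ln B| ≈ 2.1 -> "significant"
```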

  8. Occam’s Razor • Models are rewarded for fitting the data well, and also for their predictiveness • Schematically: Evidence ≈ (best-fit likelihood) × (Occam factor), where the Occam factor penalises models that spread their prior over regions of parameter space the data do not support
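A one-dimensional illustration of that factorisation (a sketch with made-up numbers): for a Gaussian likelihood of width σL sitting well inside a uniform prior of width Δθ, the evidence is E ≈ Lmax × (σL√(2π)/Δθ), so widening the prior of a less predictive model directly lowers its evidence.

```python
import numpy as np

# Numerical check that E = ∫ L(θ) π(θ) dθ matches L_max × Occam factor
# for a narrow Gaussian likelihood inside a wide uniform prior.
L_max, sigma_L, delta_prior = 1.0, 0.1, 10.0       # illustrative values

theta = np.linspace(-delta_prior / 2, delta_prior / 2, 200_001)
dtheta = theta[1] - theta[0]
likelihood = L_max * np.exp(-0.5 * (theta / sigma_L) ** 2)

evidence_numeric = (likelihood * (1.0 / delta_prior)).sum() * dtheta
occam_factor = sigma_L * np.sqrt(2.0 * np.pi) / delta_prior

print(evidence_numeric, L_max * occam_factor)      # both ≈ 0.025
```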

  9. Lindley’s Paradox • Consider three data sets, each measuring a parameter ω. By sampling statistics, all three rule out ω = 0 (the simpler model) at 95% confidence. But B01 = 0.5, 1.8, 18 respectively: as the data improve, the evidence swings in favour of the simpler model. Trotta 2007

  10. Methods • The Laplace Approximation • Assumes that P(θ|D,M) is a multi-dimensional Gaussian • The Savage-Dickey Density Ratio • Needs separable priors and nested models • (and the reference value to be in the high-likelihood region of the more complex model for accuracy) • Thermodynamic Integration • Needs a series of MCMC runs at different temperatures • Accurate but computationally very intensive • VEGAS • Likelihood surface needs to be “not too far” from Gaussian
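Of the methods listed above, the Savage-Dickey density ratio is simple enough to sketch in a few lines; the Gaussian samples standing in for an MCMC chain and the prior range are illustrative assumptions:

```python
import numpy as np

# Savage-Dickey for nested models: B01 = P(θ=θ_ref | D, M1) / π(θ=θ_ref | M1),
# the posterior density of the extra parameter at its nested value divided
# by its prior density, with the posterior density estimated from a chain.
rng = np.random.default_rng(1)

theta_ref = 0.0                                   # value recovering the simpler model
prior_lo, prior_hi = -1.0, 1.0                    # uniform prior on the extra parameter
chain = rng.normal(0.3, 0.1, 50_000)              # stand-in for posterior MCMC samples

bin_width = 0.02                                  # crude histogram estimate at theta_ref
in_bin = np.abs(chain - theta_ref) < bin_width / 2
posterior_density = in_bin.mean() / bin_width
prior_density = 1.0 / (prior_hi - prior_lo)

B01 = posterior_density / prior_density
print(B01)   # well below 1 here, i.e. the data favour the extended model
```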

  11. Nested Sampling Nested Sampling (Skilling 2004/5) performs the integral using Monte Carlo samples to trace the variation in likelihood with prior mass (X), peeling away thin nested iso-surfaces of equal likelihood. • The prior mass is sampled uniformly • The evidence is incremented using the minimum-likelihood point • Discarding this point reduces X by a known factor • A new random point is found with L greater than the previous minimum likelihood
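The loop above can be sketched in a few dozen lines of Python on a 1D toy problem; the Gaussian likelihood, the unit prior and the brute-force “draw until better” replacement step are illustrative assumptions (real implementations use far more efficient constrained sampling and work fully in log space):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_L(theta, mu=0.5, sigma=0.05):
    # Toy likelihood: a normalised Gaussian well inside the prior range [0, 1].
    return -0.5 * ((theta - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

N = 100                                    # number of live points
live = rng.uniform(0.0, 1.0, N)            # initial draws from the uniform prior
live_logL = log_L(live)

evidence, log_X_prev = 0.0, 0.0            # accumulated E and ln(remaining prior mass)
for i in range(1, 2001):
    # Stop once the largest possible remaining contribution is negligible
    # (the stopping criterion discussed two slides below).
    if np.exp(log_X_prev) * np.exp(live_logL.max()) < 1e-3 * evidence:
        break

    worst = np.argmin(live_logL)           # lowest-likelihood live point
    log_X = -i / N                         # prior mass shrinks by ~e^(-1/N) per step
    weight = np.exp(log_X_prev) - np.exp(log_X)
    evidence += weight * np.exp(live_logL[worst])   # increment E with the discarded point
    log_X_prev = log_X

    # Replace the discarded point with a new prior draw at higher likelihood.
    L_min = live_logL[worst]
    while True:
        candidate = rng.uniform(0.0, 1.0)
        if log_L(candidate) > L_min:
            break
    live[worst], live_logL[worst] = candidate, log_L(candidate)

# Spread the remaining live points over the leftover prior mass X.
evidence += np.exp(log_X_prev) * np.exp(live_logL).mean()
print(np.log(evidence))   # ≈ 0, since the normalised toy likelihood integrates to ~1
```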

  12. Nested Sampling • Each iteration reduces X by a factor N/(N+1) (on average), this factor being the expectation value of the largest of N values drawn from U(0,1). • The N ‘live’ points migrate to the high-likelihood regions, always sampling uniformly from the remaining prior volume (X).
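A quick numerical check of that expectation value (illustrative only):

```python
import numpy as np

# The mean of the largest of N draws from U(0,1) is N/(N+1), the average
# per-iteration compression factor of the remaining prior mass X.
rng = np.random.default_rng(3)
N = 100
largest = rng.uniform(size=(50_000, N)).max(axis=1)
print(largest.mean(), N / (N + 1))   # both ≈ 0.990
```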

  13. Movie

  14. Stopping Criterion We stop when some accuracy criterion is met on the sum of the accumulated evidence from the discarded points and the evidence from the remaining live points. Numerical uncertainty is dominated by the Poisson variability in the number of steps needed to reach the bulk of the posterior, NH ± √(NH), where H is the logarithm of the compression ratio; this translates into an uncertainty on ln E of roughly √(H/N).
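A minimal sketch of that error estimate, computed from the discarded points' log-likelihoods and log prior-mass elements (assumed to come from a run like the sketch above):

```python
import numpy as np

def log_evidence_with_error(logL, log_w, n_live):
    """Return ln E and the Skilling-style uncertainty sqrt(H / N)."""
    logL, log_w = np.asarray(logL), np.asarray(log_w)
    log_Z = np.logaddexp.reduce(logL + log_w)   # ln E = ln Σ L_i w_i
    p = np.exp(logL + log_w - log_Z)            # posterior weight of each point
    H = np.sum(p * (logL - log_Z))              # information = log compression ratio
    return log_Z, np.sqrt(H / n_live)
```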

  15. Posterior Samples The nested sampling algorithm also generates a set of posterior samples for parameter estimation: each discarded point θi is assigned the posterior weight pi = Li wi / E, where wi is its prior-mass element.
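A minimal sketch of that recycling step, assuming arrays of the discarded points with their log-likelihoods and log prior-mass elements:

```python
import numpy as np

def posterior_resample(thetas, logL, log_w, n_draws=10_000, seed=4):
    """Resample discarded nested-sampling points with weights p_i = L_i w_i / E."""
    rng = np.random.default_rng(seed)
    logL, log_w = np.asarray(logL), np.asarray(log_w)
    log_Z = np.logaddexp.reduce(logL + log_w)   # ln E
    p = np.exp(logL + log_w - log_Z)            # normalised posterior weights
    p /= p.sum()                                # guard against round-off
    idx = rng.choice(len(thetas), size=n_draws, p=p)
    return np.asarray(thetas)[idx]
```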

  16. Applications: WMAP3 • WMAP alone cannot distinguish between the Harrison-Zel’dovich (HZ) model and a tilted (ns) model • Some evidence for ns ≠ 1 from WMAP3 plus external data, but only at odds of 8:1 • Inflation predicts both scalar (ns) and tensor (r) perturbations; HZ is preferred, unless a log prior is used on r

  17. Dark Energy Models • w(a) = w0 + (1 − a) wa, for z = 0 to 2 • Or 78%, 21% and 1% for models I, II & V • Liddle, Mukherjee, Parkinson & Wang 2006
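For reference, the equation-of-state parameterisation quoted above written as a function of redshift (a trivial sketch; the default values are placeholders, not fitted results):

```python
def w_of_z(z, w0=-1.0, wa=0.0):
    # w(a) = w0 + (1 - a) wa, with a = 1 / (1 + z)
    a = 1.0 / (1.0 + z)
    return w0 + (1.0 - a) * wa

print(w_of_z(0.0), w_of_z(2.0))   # w0 at z = 0; w0 + (2/3) wa at z = 2
```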

  18. Conclusions • Model selection (via Bayesian evidences) and parameter estimation are two levels of inference. • The nested sampling scheme computes evidences accurately and efficiently; it also gives parameter posteriors (www.cosmonest.org) • Applications: simple models still favoured • model-selection-based forecasting • Bayesian model averaging • many others… foreground contamination, cosmic topology, cosmic strings…
