
The good sides of Bayes




Presentation Transcript


  1. The good sides of Bayes. Jeannot Trampert, Utrecht University

  2. Bayes gives us an answer! Example of inner core anisotropy

  3. Normal mode splitting functions are linearly related to seismic anisotropy in the inner core. The kernels Kα, Kβ and Kγ are of different size, hence regularization affects the different models differently.

  4. Regularized inversion
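To make the kernel-size point from the previous slide concrete, here is a minimal damped least-squares sketch in Python (hypothetical two-parameter toy numbers, not the actual splitting-function kernels): as the damping grows, the parameter sensed through the small kernel is shrunk far more strongly than the well-sensed one.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: two parameters whose kernels differ in size by a factor 100,
# mimicking the disparity between K_alpha and K_gamma (hypothetical numbers).
m_true = np.array([1.0, 1.0])
G = np.array([[10.0, 0.0],
              [0.0,  0.1]])            # second kernel 100x smaller
d = G @ m_true + rng.normal(scale=0.05, size=2)

for lam in (0.0, 0.01, 1.0):
    # damped least squares: m = (G^T G + lam I)^-1 G^T d
    m_est = np.linalg.solve(G.T @ G + lam * np.eye(2), G.T @ d)
    print(f"lambda = {lam:5.2f}  ->  m_est = {m_est}")
# The weakly sensed parameter collapses towards zero long before the
# strongly sensed one is affected: one damping value, two very different biases.
```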

  5. Full model space search (NA, Sambridge 1999)

  6. Resolves a 20-year disagreement between body-wave and normal-mode data (Beghein and Trampert, 2003)

  7. Bayes or not to Bayes? We need proper uncertainty analysis to interpret seismic tomography: probability density functions for all model parameters

  8. Do models agree? No knowledge of uncertainty implies subjective comparisons.

  9. Partial knowledge of uncertainty allows hypothesis testing

  10. Deschamps and Tackley, 2009

  11. Mean density model separated into its chemical and temperature contributions (full pdf obtained with the NA; Trampert et al., 2004)

  12. Deschamps and Tackley, 2009

  13. Full knowledge of uncertainty allows one to evaluate the probability of overlap, or consistency, between models
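A toy Monte Carlo illustration of this point (hypothetical marginals, not the published ones): given full pdfs for the same parameter from two models, the probability that they are consistent follows directly from the samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D marginals for the same model parameter from two studies.
m_a = rng.normal(0.5, 0.3, 200_000)   # model A: mean 0.5, sigma 0.3
m_b = rng.normal(0.2, 0.4, 200_000)   # model B: mean 0.2, sigma 0.4

# Probability that the two models agree to within a chosen tolerance.
tol = 0.25
p_consistent = np.mean(np.abs(m_a - m_b) < tol)
print(f"P(|m_a - m_b| < {tol}) = {p_consistent:.2f}")
```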

  14. What is uncertainty? Consider a linear problem $d = Gm + e$, where $d$ are the data, $m$ the model, $G$ the partial derivatives and $e$ the data uncertainty. The estimated solution is $\hat{m} = m_0 + L(d - Gm_0)$, where $m_0$ is a starting model and $L$ the linear inverse operator.

  15. What is uncertainty? This can be rewritten as $\hat{m} = Rm + (I - R)m_0 + Le$ with the resolution operator $R = LG$, where $(I - R)$ is the null-space operator, resulting in a formal statistical uncertainty expressed with covariance operators as $C_{\hat{m}} = (I - R)\,C_{m_0}\,(I - R)^T + L\,C_d\,L^T$.
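A small numerical sketch of these operators, assuming a hypothetical under-determined problem (random $G$, damped least-squares inverse): it builds $L$, the resolution operator $R = LG$, the null-space operator $I - R$, and the covariance propagated from the data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical under-determined linear problem: 20 data, 30 parameters.
n_d, n_m = 20, 30
G = rng.normal(size=(n_d, n_m))
Cd = 0.01 * np.eye(n_d)                 # data covariance (sigma_d = 0.1)
lam = 1.0                               # damping

# Damped least-squares inverse operator L = (G^T G + lam I)^-1 G^T.
L = np.linalg.solve(G.T @ G + lam * np.eye(n_m), G.T)

R = L @ G                               # resolution operator
N = np.eye(n_m) - R                     # null-space operator (I - R)

# Covariance propagated from the data alone; it misses the null-space
# term, which is why formal errors of regularized inversions are
# too optimistic.
Cm_data = L @ Cd @ L.T
print("trace(R) =", np.trace(R).round(2), "out of", n_m, "parameters")
```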

  16. What is uncertainty?

  17. How can we estimate uncertainty? • Ignore it: should not be an option but is the common approach • Try and estimate Δm: • Regularized extremal bound analysis (Meju, 2009) • Null-space shuttle (Deal and Nolet, 1996) • Probabilistic tomography: • Neighbourhood algorithm (Sambridge, 1999) • Metropolis (Mosegaard and Tarantola, 1995) • Neural networks (Meier et al., 2007)

  18. The most general solution of an inverse problem (Bayes): the posterior is the normalized product of prior and likelihood, $\sigma(m) = k\,\rho(m)\,L(m)$ (Tarantola, 2005)

  19. A full model space search should estimate $\sigma(m)$ • Exhaustive search • Brute-force Monte Carlo (Shapiro and Ritzwoller, 2002) • Simulated annealing (global optimisation with a convergence proof) • Genetic algorithms (global optimisation with no convergence proof) • Neighbourhood algorithm (Sambridge, 1999) • Sample $\rho(m)$ and apply the Metropolis rule on $L(m)$; this results in importance sampling of $\sigma(m)$ (Mosegaard and Tarantola, 1995) • Neural networks (Meier et al., 2007)

  20. The neighbourhood algorithm (NA; Sambridge, 1999). Stage 1: guided sampling of the model space. Samples concentrate in areas (neighbourhoods) of better fit.
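A much-simplified stage-1 sketch (assumptions: the real NA walks along parameter axes using exact Voronoi-cell intersections, whereas here cell membership is enforced by a nearest-neighbour rejection test; the misfit is a hypothetical 2-D test function):

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(m):
    # Hypothetical 2-D test misfit standing in for a data misfit.
    return (1.0 - m[0])**2 + 100.0 * (m[1] - m[0]**2)**2

ns, nr, n_iter = 20, 5, 50                   # new samples, cells resampled, iterations
samples = rng.uniform(-2, 2, size=(ns, 2))   # initial uniform sampling

for _ in range(n_iter):
    fits = np.array([misfit(m) for m in samples])
    centres = samples[np.argsort(fits)[:nr]]  # the nr best-fitting models
    new = []
    for c in centres:
        kept = tries = 0
        while kept < ns // nr and tries < 1000:
            tries += 1
            trial = c + rng.normal(scale=0.3, size=2)
            # Trial lies in the Voronoi cell of c iff c is its nearest sample.
            nearest = samples[np.argmin(np.sum((samples - trial)**2, axis=1))]
            if np.allclose(nearest, c):
                new.append(trial)
                kept += 1
    samples = np.vstack([samples, new])       # sampling densifies around good fits
```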

  21. The neighbourhood algorithm (NA). Stage 2: importance sampling. Resampling so that the sampling density reflects the posterior. [Figure: 2-D and 1-D marginals]
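Stage 2 (the appraisal stage) resamples the stage-1 ensemble without any new forward modelling, treating the posterior as constant inside each Voronoi cell. The published algorithm uses a Gibbs sampler along parameter axes; the sketch below substitutes a plain random-walk Metropolis on the same nearest-neighbour surrogate, with a hypothetical stand-in ensemble:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the stage-1 output: models and their log-posterior values.
ensemble = rng.uniform(-2, 2, size=(500, 2))
logpost = -np.array([(1 - m[0])**2 + 100 * (m[1] - m[0]**2)**2 for m in ensemble])

def surrogate(x):
    # Posterior assumed constant inside each Voronoi cell: take the value
    # at the nearest stage-1 sample (no forward modelling needed).
    return logpost[np.argmin(np.sum((ensemble - x)**2, axis=1))]

x = ensemble[np.argmax(logpost)].copy()
lp = surrogate(x)
chain = []
for _ in range(20000):
    trial = x + rng.normal(scale=0.2, size=2)
    if np.all(np.abs(trial) <= 2.0):           # stay inside the prior box
        lp_t = surrogate(trial)
        if np.log(rng.uniform()) < lp_t - lp:  # Metropolis rule on the surrogate
            x, lp = trial, lp_t
    chain.append(x.copy())

# 1-D marginal of the first parameter by Monte Carlo integration (histogram).
density, edges = np.histogram([m[0] for m in chain], bins=30, density=True)
```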

  22. Advantages of NA • Interpolation in model space with Voronoi cells • Relative ranking in both stages (less dependent on data uncertainty) • Marginals calculated by Monte Carlo integration → convergence check • Marginals are a compact representation of the seismic data and prior rather than a model

  23. Example: a global mantle model • Using body-wave arrival times, surface-wave dispersion measurements and normal-mode splitting functions • Same mathematical formulation

  24. Mosca et al., 2011

  25. Mosca et al., 2011

  26. What does it all mean? Mineral physics will tell us! Thermo-chemical parameterization: • Temperature • Fraction of Pv (pPv) • Fraction of total Fe

  27. Example: Importance sampling using the Metropolis rule (Mosegaard and Tarantola, 1995)
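A minimal sketch of the recipe, assuming a hypothetical forward function and data set: a random walk that on its own samples the prior ρ(m) (here uniform on [-1, 1]², with out-of-bounds proposals rejected), while the Metropolis rule is applied to the likelihood L(m) alone, so the chain ends up importance-sampling σ(m).

```python
import numpy as np

rng = np.random.default_rng(5)

def log_likelihood(m, d_obs, sigma=0.1):
    # Hypothetical non-linear forward problem d = g(m).
    d_pred = np.array([np.sin(m[0]) + m[1]**2, m[0] * m[1]])
    return -0.5 * np.sum((d_pred - d_obs)**2) / sigma**2

d_obs = np.array([0.2, -0.15])     # hypothetical observed data
m = np.zeros(2)                    # start inside the uniform prior [-1, 1]^2
logL = log_likelihood(m, d_obs)
posterior = []

for _ in range(50000):
    # A walk that samples the prior rho(m): symmetric Gaussian steps,
    # proposals outside the prior support are rejected (chain stays put).
    trial = m + rng.normal(scale=0.1, size=2)
    if np.all(np.abs(trial) <= 1.0):
        logL_trial = log_likelihood(trial, d_obs)
        # Metropolis rule on the likelihood only.
        if np.log(rng.uniform()) < logL_trial - logL:
            m, logL = trial, logL_trial
    posterior.append(m.copy())     # samples are distributed as sigma(m)
```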

  28. Disadvantages of NA and Metropolis: they work only on small linear and non-linear problems (fewer than ~50 parameters)

  29. The neural network (NN) approach (Bishop, 1995; MacKay, 2003) • A neural network can be seen as a non-linear filter between any input and output • The NN is an approximation to a non-linear function g, where d = g(m) • Works on the forward or the inverse function • A training set (which contains the physics) is used to calculate the coefficients of the NN by non-linear optimisation
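An illustrative training sketch (assumptions: a hypothetical smooth forward function g and a plain scikit-learn MLP fit to the inverse mapping d → m; the networks of Meier et al. are mixture density networks that output full 1-D marginals rather than the point estimates shown here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

def forward(m):
    # Hypothetical smooth forward function d = g(m).
    return np.column_stack([np.sin(m[:, 0]) + m[:, 1]**2,
                            m[:, 0] * m[:, 1]])

# The training set contains the physics: models drawn from the prior,
# data computed with the forward function.
m_train = rng.uniform(-1, 1, size=(5000, 2))
d_train = forward(m_train)

# The network approximates the inverse mapping d -> m by non-linear
# optimisation of its coefficients on the training set.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(d_train, m_train)

m_test = np.array([[0.3, -0.5]])
print(net.predict(forward(m_test)))   # should be close to m_test
```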

  30. Properties of NN • Dimensionality is not a problem because the NN approximates a function, not a data prediction! • Flexible: invert for any combination of parameters • 1-D or 2-D marginals only

  31. Mantle transition zone discontinuities

  32. Probabilistic tomography using Bayes’ theorem is possible, but challenges remain • Control the prior and data uncertainty • Full pdfs in high dimensions • Interpret and visualize the information contained in the marginals
