
Validating uncertain predictions

Tony O'Hagan, Leo Bastos, Jeremy Oakley, University of Sheffield

Presentation Transcript


  1. Validating uncertain predictions
  Tony O'Hagan, Leo Bastos, Jeremy Oakley, University of Sheffield

  2. Why am I here?
  • I probably know less about finite element modelling than anyone else at this meeting
  • But I have been working with mechanistic models of all kinds for almost 20 years
    • Models of climate, oil reservoirs, rainfall runoff, aero-engines, sewer systems, vegetation growth, disease progression, ...
  • What I do know about is uncertainty
    • I'm a statistician
    • My field is Bayesian statistics
    • One of my principal research areas is to understand, quantify and reduce uncertainty in the predictions made by models
  • I bring a different perspective on model validation

  3. Some background
  • Models are often highly computer intensive
    • Long run times
    • FE models on fine grid
    • Oil reservoir simulator runs can take days
  • Things we want to do with them may require many runs
    • Uncertainty analysis: exploring output uncertainty induced by uncertainty in model inputs
    • Calibration: searching for parameter values to match observational data
    • Optimisation: searching for input settings to optimise output
  • We need efficient methods requiring minimal run sets
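To make the cost concrete, here is a minimal sketch (not from the talk; the toy simulator and input distributions are invented) of a brute-force Monte Carlo uncertainty analysis. Propagating input uncertainty this way takes thousands of runs, which is exactly what becomes infeasible when a single run takes hours or days.

```python
import numpy as np

def simulator(x1, x2):
    """Toy stand-in for an expensive model; imagine each call taking hours."""
    return np.sin(3.0 * x1) + 0.5 * x2 ** 2

rng = np.random.default_rng(0)
n_runs = 10_000                        # direct Monte Carlo needs many runs
x1 = rng.normal(0.5, 0.1, n_runs)      # assumed uncertainty on input 1
x2 = rng.uniform(0.0, 1.0, n_runs)     # assumed uncertainty on input 2

y = simulator(x1, x2)                  # 10,000 model evaluations
print(f"output mean {y.mean():.3f}, output sd {y.std():.3f}")
```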

  4. Emulation
  • We use Bayesian statistics
  • Based on a training sample of model runs, we estimate what the model output would be at all untried input configurations
  • The result is a statistical representation of the model
    • In the form of a stochastic process over input space
  • The process mean is our best estimate of what the output would be at any input configuration
    • Uncertainty is captured by variances and covariances
  • It correctly returns what we know
    • At any training sample point, the mean is the observed value
    • With zero variance
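A minimal sketch of this idea, using scikit-learn's Gaussian process regression as a stand-in for the fully Bayesian emulators developed in MUCM (the toy simulator, training design and kernel are illustrative assumptions, not taken from the talk):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    """Toy stand-in for an expensive model: one input, one output."""
    return np.sin(3.0 * x) * x

# Small training sample of model runs
X_train = np.array([[0.1], [0.4], [0.6], [0.9]])
y_train = simulator(X_train).ravel()

# Fit the emulator (hyperparameters estimated from the training runs)
kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predict at a mix of training and untried input configurations
X_new = np.array([[0.1], [0.25], [0.5], [0.75], [0.9]])
mean, std = gp.predict(X_new, return_std=True)

# At training inputs (0.1, 0.9) the mean reproduces the run and the sd is ~0;
# between runs the sd grows, quantifying emulator uncertainty.
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x = {x:.2f}   mean = {m:+.3f}   sd = {s:.3f}")
```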

  5. 2 code runs
  • Consider one input and one output
  • Emulator estimate interpolates data
  • Emulator uncertainty grows between data points

  6. 3 code runs
  • Adding another point changes estimate and reduces uncertainty

  7. 5 code runs
  • And so on
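The behaviour sketched on slides 5-7 can be reproduced with the same kind of toy emulator: hold the kernel fixed and grow the training design, and the predictive uncertainty between runs shrinks (all settings below are assumptions for illustration).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    return np.sin(3.0 * x) * x

X_grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
for n_runs in (2, 3, 5):
    X_train = np.linspace(0.0, 1.0, n_runs).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  optimizer=None)   # keep the kernel fixed
    gp.fit(X_train, simulator(X_train).ravel())
    _, std = gp.predict(X_grid, return_std=True)
    print(f"{n_runs} runs: largest predictive sd between runs = {std.max():.3f}")
```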

  8. MUCM
  • The emulator is a fast meta-model but with a full statistical representation of uncertainty
  • We can build the emulator and use it for tasks such as calibration with far fewer model runs than other methods
    • Typically 10 or 100 times fewer
  • The RCUK Basic Technology grant Managing Uncertainty in Complex Models is developing this approach
    • http://mucm.group.shef.ac.uk
    • See in particular the MUCM toolkit

  9. Validation
  • What does it mean to validate a simulation model?
    • Compare model predictions with reality
  • But the model is always wrong
    • How can something which is always wrong ever be called valid?
  • Conventionally, a model is said to be valid if its predictions are close enough to reality
    • How close is close enough? Depends on purpose
  • Conventional approaches to validation confuse the absolute (valid) with the relative (fit for this purpose)
  • Let's look at an analogous validation problem

  10. Validating an emulator
  • What does it mean to validate an emulator?
    • Compare the emulator's predictions with the reality of model output
    • Make a validation sample of runs at new input configurations
  • The emulator mean is the best prediction and is always wrong
  • But the emulator predicts uncertainty around that mean
  • The emulator is valid if its expressions of uncertainty are correct
    • Actual outputs should fall in 95% intervals 95% of the time
    • No less and no more than 95% of the time
    • Standardised residuals should have zero mean and unit variance
  • See Bastos and O'Hagan preprint on MUCM website
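Both checks can be computed directly from a validation sample. A minimal sketch, continuing the toy emulator from above (Bastos and O'Hagan develop more refined diagnostics; this shows only the basic calculations):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    return np.sin(3.0 * x) * x

rng = np.random.default_rng(1)

# Build the emulator from a small training sample of runs
X_train = rng.uniform(0.0, 1.0, (8, 1))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(X_train, simulator(X_train).ravel())

# Validation sample: fresh runs at new input configurations
X_val = rng.uniform(0.0, 1.0, (50, 1))
y_val = simulator(X_val).ravel()
mean, std = gp.predict(X_val, return_std=True)

resid = (y_val - mean) / std                 # standardised residuals
coverage = np.mean(np.abs(resid) < 1.96)     # fraction inside 95% intervals

# A valid emulator has coverage near 0.95 and residuals with mean ~0, sd ~1
print(f"95% interval coverage: {coverage:.2f} (target 0.95)")
print(f"standardised residuals: mean {resid.mean():+.2f}, sd {resid.std():.2f}")
```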

  11. Validation diagnostics

  12. Validating the model
  • Let's accept that there is uncertainty around model predictions
    • We need to be able to make statistical predictions
    • Then if we compare with observations we can see whether reality falls within the prediction bounds correctly
  • The difference between model output and reality is called model discrepancy
    • It's also a function of the inputs
    • Like the model output, it's typically a smooth function
    • Like the model output, we can emulate this function
  • We can validate this
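A minimal sketch of that idea, with invented toy functions standing in for both the simulator and reality: the discrepancy is estimated at the observed inputs, emulated as a smooth function, and added back to the model output to give statistical predictions of reality that can then be validated against further observations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def model_output(x):
    # The simulator (toy stand-in)
    return np.sin(3.0 * x) * x

def reality(x):
    # The real process: model plus a smooth discrepancy (invented for illustration)
    return np.sin(3.0 * x) * x + 0.3 * x

rng = np.random.default_rng(2)
X_obs = rng.uniform(0.0, 1.0, (15, 1))                       # field observations
y_obs = reality(X_obs).ravel() + rng.normal(0.0, 0.02, 15)   # with measurement noise

# Observed discrepancy = observation minus model output at the same inputs
disc = y_obs - model_output(X_obs).ravel()

# Emulate the discrepancy as a smooth function of the inputs;
# the WhiteKernel term absorbs the observation noise.
gp_disc = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.3) + WhiteKernel(1e-3), normalize_y=True)
gp_disc.fit(X_obs, disc)

# Statistical prediction of reality = model output + emulated discrepancy
X_new = np.array([[0.25], [0.75]])
d_mean, d_std = gp_disc.predict(X_new, return_std=True)
for x, dm, ds in zip(X_new.ravel(), d_mean, d_std):
    print(f"x = {x:.2f}: reality ~ {model_output(x) + dm:+.3f} +/- {2.0 * ds:.3f}")
```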

  13. Model discrepancy
  • Model discrepancy was first introduced within the MUCM framework in the context of model calibration
    • Ignoring discrepancy leads to over-fitting and over-confidence in the calibrated parameters
    • Understanding that it is a smooth error term rather than just noise is also crucial
  • To learn about discrepancy we need a training sample of observations of the real process
    • Then we can validate our emulation of reality using further observations
  • This is one ongoing strand of the MUCM project

  14. Beyond validation
  • An emulator (of a model or of reality) can be valid and yet useless in practice
  • Given a sample of real-process observations, we can predict the output at any input to be the sample mean plus or minus two sample standard deviations
    • This will validate OK
    • Assuming the sample is representative
    • But it ignores the model and makes poor use of the sample!
  • Two valid emulators can be compared on the basis of the variance of their predictions
    • And declared fit for purpose if the variance is small enough
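For example (toy numbers assumed, reusing the invented reality function from the previous sketch), the trivial "sample mean plus or minus two standard deviations" predictor is easy to construct and may well pass the coverage checks, but its predictive variance is far larger than an emulator's:

```python
import numpy as np

def reality(x):
    # Toy real process (invented for illustration)
    return np.sin(3.0 * x) * x + 0.3 * x

rng = np.random.default_rng(3)
y_sample = reality(rng.uniform(0.0, 1.0, 30))   # sample of real-process observations

# The same prediction is issued at every input: sample mean +/- 2 sample sd
mean = y_sample.mean()
sd = y_sample.std(ddof=1)
print(f"trivial predictor: {mean:+.3f} +/- {2.0 * sd:.3f} at every input")
# It ignores the model entirely; an emulator of reality typically has a much
# smaller predictive variance, so both may be valid but only one is fit for purpose.
```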

  15. In conclusion
  • I think it is useful to separate the absolute property of validity from the relative property of fitness for purpose
  • Model predictions alone are useless without some idea of how accurate they are
    • Quantifying uncertainty in the predictions by building an emulator allows us to talk about validity
    • Only valid statistical predictions of reality should be accepted
    • Model predictions with a false measure of their accuracy are also useless!
  • We can choose between valid predictions on the basis of how accurate they are
    • And ask if they are sufficiently accurate for purpose

  16. Advertisement
  • Workshop on emulators and MUCM methods: "Uncertainty in Simulation Models"
  • Friday 10th July 2009, 10.30am - 4pm
  • National Oceanography Centre, Southampton
  • http://mucm.group.shef.ac.uk/Pages/Project_News.htm
  • Please register with Katherine Jeays-Ward (k.jeays-ward@sheffield.ac.uk) by 3rd July 2009
  • Registration is free, and lunch/refreshments will be provided
