assessing uncertainty in bottom-up modelling crop- and grasslands


  1. assessing uncertainty in bottom-up modelling of crop- and grasslands
M. Wattenbach (a), Pia Gottschalk (a), Fred Hattermann (b), Claus Rachimow (b), Michael Flechsig (b), Astley Hastings (a), Pete Smith (a)
(a) University of Aberdeen, School of Medicine and Life Science, Department of Plant and Soil Science, Cruickshank Building, St. Machar Drive, Aberdeen, AB24 3UU, UK, m.wattenbach@abdn.ac.uk
(b) Potsdam Institute for Climate Impact Research, Telegrafenberg, 14473 Potsdam, Germany
CarboEurope meeting, Crete, 2006

  2. outline
• Definitions and background
• Uncertainty
• Sources of uncertainty
• Some case studies
• A concept for a framework approach – a step towards comparability of model results
• Conclusions
• Things I learned recently, don't really understand, but think might be important

  3. Uncertainty
• Uncertainty: the state of being unsure of something
• In measurement science (ISO 1995 – the GUM): "uncertainty: parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand"
• The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence
• Two ways of evaluating uncertainty:
• Type A is based on the statistical analysis of a series of measurements and their associated variance
• Type B, which can also be expressed as a standard deviation, is based on expert knowledge using all available sources of information
(a minimal numerical sketch of both evaluations follows below)
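To make the two evaluation routes concrete, here is a minimal Python sketch, not from the talk: the measurement values and the assumed uniform distribution for the Type B case are purely illustrative. Type A derives a standard uncertainty from repeated measurements; Type B converts an expert-judged interval half-width into a standard uncertainty, here assuming a uniform distribution as the GUM recommends when only bounds are known.

```python
import numpy as np

# --- Type A: statistical evaluation from repeated measurements ---
# Hypothetical replicate measurements of a flux [g C m-2 d-1]
measurements = np.array([2.31, 2.45, 2.28, 2.52, 2.39, 2.41])
mean = measurements.mean()
# Standard uncertainty of the mean = sample std dev / sqrt(n)
u_type_a = measurements.std(ddof=1) / np.sqrt(len(measurements))

# --- Type B: evaluation from expert knowledge ---
# Expert judgement (assumed): the true value lies within +/- 0.2 of the
# estimate, with no value inside the interval more likely than another.
# For such a uniform distribution the GUM gives u = half_width / sqrt(3).
half_width = 0.2
u_type_b = half_width / np.sqrt(3)

print(f"mean = {mean:.3f}, Type A standard uncertainty = {u_type_a:.3f}")
print(f"Type B standard uncertainty = {u_type_b:.3f}")
```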

  4. Uncertainty is not Error
Error refers to the imperfection of a measurement caused by systematic or random effects in the measurement process. The random component arises from variance and can be reduced by increasing the number of measurements, just as the systematic component can be reduced if it stems from a recognizable process. The uncertainty in the result of a measurement, on the other hand, arises from the remaining variance of the random component and from the uncertainties connected to the correction for systematic effects (ISO 1995). The sketch below illustrates the difference.
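A small illustration of this distinction, not part of the talk (the true value, bias and noise level are invented): averaging more measurements shrinks the random error roughly as 1/sqrt(n), while an uncorrected systematic bias stays put.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 10.0   # hypothetical true quantity
bias = 0.5          # uncorrected systematic effect
noise_sd = 2.0      # random measurement noise

for n in (10, 100, 10_000):
    sample = true_value + bias + rng.normal(0.0, noise_sd, size=n)
    # The random component of the error shrinks ~ noise_sd / sqrt(n) ...
    random_part = noise_sd / np.sqrt(n)
    # ... but the mean still misses the true value by the bias.
    print(f"n={n:6d}: mean={sample.mean():7.3f}, "
          f"expected random error ~{random_part:.3f}, bias stays {bias}")
```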

  5. Definition of model uncertainty [diagram; only the labels are recoverable]
• type A – measured/statistical uncertainty
• type B – scientific judgement uncertainty
• type C – baseline uncertainty
• type D – scenario uncertainty
• type E – conceptual uncertainty
(the original figure related these types to the measurement model and the ecosystem model)

  6. example studies
One model at one site:
• DNDC at the Oensingen cropland site (Hastings et al., submitted, and Wattenbach et al., poster session)
Cross-model comparison at one site:
• five models at the same site to compare model uncertainty – DNDC, FASSET, PASIM, EPIC, (CENTURY) (M. Wattenbach et al., in preparation)
Cross-site comparison, one model at different sites:
• the PASIM model at different grassland sites in Europe (P. Gottschalk et al., accepted, AGEE)

  7. DNDC at the Oensingen cropland site [results figure]

  8. Cross-site: the PaSim model at European grassland sites

  9. Cross-site: PaSim factor importance [figure]
Contribution index: normalized change of the standard deviation of NEE, 2002 and 2003 [%]; panels for Oensingen (2002, 2003), Carlow (2002, 2003), Laqueuille Ex and Easter Bush.
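The slides do not spell out how the contribution index is computed, so the following Python sketch is one plausible reading, stated as an assumption: the index for a factor is the relative change in the standard deviation of simulated NEE when that factor's uncertainty is removed (the factor fixed at its nominal value). The toy model and the factor spreads are invented for illustration.

```python
import numpy as np

def toy_nee_model(params):
    """Stand-in for an ecosystem model returning annual NEE."""
    temp_sens, moisture_sens, lue = params
    return -2.0 * lue + 0.8 * temp_sens + 0.3 * moisture_sens

def contribution_index(model, nominal, spreads, n=5000, seed=0):
    """Normalized change [%] in output std dev when one factor is fixed."""
    rng = np.random.default_rng(seed)
    k = len(nominal)
    base = rng.normal(nominal, spreads, size=(n, k))
    sd_all = np.std([model(p) for p in base])
    index = []
    for i in range(k):
        fixed = base.copy()
        fixed[:, i] = nominal[i]          # remove factor i's uncertainty
        sd_i = np.std([model(p) for p in fixed])
        index.append(100.0 * (sd_all - sd_i) / sd_all)
    return index

print(contribution_index(toy_nee_model,
                         nominal=[1.0, 1.0, 1.0],
                         spreads=[0.2, 0.1, 0.3]))
```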

  10. Results – cross-model comparison [figure; panels for 2002 and 2003]

  11. Contribution index: normalized change of standard deviation, 2002 [%] [figure]

  12. What we need (ISO?)
• Standardized methods for uncertainty and sensitivity analysis of ecosystem models
• Standardized datasets to allow inter-model comparison of uncertainty and sensitivity measures
• Standardized software interfaces for ecosystem models to allow access to databases for model experiments and results
• Databases of model evaluation results to give scientists, stakeholders and policy makers easy access to information on model quality and uncertainty
• To implement the approach we propose a web-based client–server architecture

  13. Framework design [architecture diagram]
Components (labels recovered from the figure): input factors & sampling schemes; multi-run control; central modelling framework server; dataset for comparison; model experiment; ecosystem model client; result database; evaluation client & visualization; post-processing & visualization; model evaluation result database. A sketch of the model-client interface such a framework implies is given below.
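The slide only names the components, so this Python sketch is my own reading of what a "standardized software interface" between the framework server and an ecosystem model client could look like; the class and method names are hypothetical, not part of the proposed framework.

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Tuple

class EcosystemModelClient(ABC):
    """Hypothetical standardized interface a model must implement so the
    central framework server can drive multi-run experiments against it."""

    @abstractmethod
    def input_factors(self) -> Dict[str, Tuple[float, float]]:
        """Name -> (lower bound, upper bound) of each uncertain factor."""

    @abstractmethod
    def run(self, factors: Dict[str, float]) -> Dict[str, float]:
        """Execute one model run and return named outputs, e.g. {'NEE': ...}."""

class ToyGrasslandModel(EcosystemModelClient):
    def input_factors(self):
        return {"light_use_efficiency": (0.8, 1.2), "q10": (1.5, 2.5)}

    def run(self, factors):
        nee = -3.0 * factors["light_use_efficiency"] + 1.2 * factors["q10"]
        return {"NEE": nee}

# The multi-run controller only ever sees the abstract interface:
def multi_run(model: EcosystemModelClient, samples: List[Dict[str, float]]):
    return [model.run(s)["NEE"] for s in samples]
```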

  14. conclusions
• Ecosystem models produce very heterogeneous uncertainties
• Results are only meaningful if they are accompanied by uncertainty ranges
• Standardisation is necessary to achieve inter-comparability
• The framework approach presented here might be a way to achieve this target

  15. Things I learned recently, don't really understand, but think might be important
• Presentation by John Norton and Ken Reckhow on "Modelling and Monitoring Environmental Outcomes in Adaptive Management (AM)", iEMSs 2006. Principles:
• Design management as continuing trial-and-error learning, in which some variation in system state is valuable because it yields information about the system's behaviour: "learning by doing"
• Compare results of alternative policies, through selected indicators, rather than attempting to optimise some cost function
• Include resilience to disturbance as an objective

  16. Things I learned recently, don't really understand, but think might be important
• Lyapunov stability (from Wikipedia)
• In mathematics, the notion of Lyapunov stability occurs in the study of dynamical systems.
• In simple terms, if all points that start out near a point x stay near x forever, then x is Lyapunov stable. More strongly, if all points that start out near x converge to x, then x is asymptotically stable.
• The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behaviour of different but "nearby" solutions to differential equations.
The formal definition is sketched below.
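For reference, the standard formal version of the "in simple terms" statement above (not on the slide) for an equilibrium $x_e$ of a dynamical system $\dot{x} = f(x)$ with flow $x(t; x_0)$:

```latex
% Lyapunov stability: trajectories starting close enough stay close forever.
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\quad
  \|x_0 - x_e\| < \delta
  \;\Longrightarrow\;
  \|x(t; x_0) - x_e\| < \varepsilon \quad \forall t \ge 0.

% Asymptotic stability: Lyapunov stable and, in addition, nearby
% trajectories converge to the equilibrium:
\exists \delta' > 0 :\quad
  \|x_0 - x_e\| < \delta'
  \;\Longrightarrow\;
  \lim_{t \to \infty} x(t; x_0) = x_e.
```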

  17. Thank you!

  18. Necessity of consistency
• Changing expectations of models
• In the beginning, models were made to explore the behaviour of systems for scientific reasons only
• Today, models are more and more used as predictive tools for risk analysis and as policy support systems
• Consequences
• We need to understand the predictive capacities and restrictions of our models
• We need standardized quality checks to give meaningful uncertainty ranges

  19. tools
• Monte Carlo method
• Monte Carlo methods: algorithms for solving various kinds of computational problems by using random numbers
• Advantage: easy to use
• Disadvantage: requires a lot of model runs
• Latin Hypercube sampling: a stratified sampling method which can characterise the population as well as simple random sampling with a smaller sample size (see the sketch after this list)
• The tool we are using is Simlab (http://sensitivity-analysis.jrc.cec.eu.int/), software designed for Monte Carlo based uncertainty and sensitivity analysis
• Advantage: easy to use because of the graphical user interface, which provides a great number of different distributions, sampling methods and parameter interactions
• Disadvantage: difficult to integrate external models
• Alternative tools and methods to Monte Carlo
• University of Sheffield: Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA), see http://www.shef.ac.uk/st1mck/code.html
• Advantage: easy to use, fast, efficient and highly precise
• Disadvantage: problems with thresholds, and time dependence is not yet integrated (planned)
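Neither Simlab nor the models from the talk are shown here; the following is a minimal sketch of Latin Hypercube versus simple random sampling using SciPy's qmc module (my choice of library; the toy model and factor bounds are invented), propagating two uncertain input factors through a model and reporting the resulting output spread.

```python
import numpy as np
from scipy.stats import qmc

def toy_model(temp_sens, lue):
    """Stand-in for an ecosystem model; returns annual NEE."""
    return -3.0 * lue + 1.1 * temp_sens

l_bounds, u_bounds = [0.5, 0.8], [1.5, 1.2]   # assumed factor ranges
n = 50                                        # deliberately small sample

# Latin Hypercube: one sample per equal-probability stratum of each factor
lhs = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(n), l_bounds, u_bounds)

# Simple random sampling over the same ranges, for comparison
rng = np.random.default_rng(1)
srs = rng.uniform(l_bounds, u_bounds, size=(n, 2))

for name, sample in (("LHS", lhs), ("random", srs)):
    out = toy_model(sample[:, 0], sample[:, 1])
    print(f"{name}: NEE mean={out.mean():.3f}, std={out.std(ddof=1):.3f}")
```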

  20. Sources of model uncertainty
• Type A and Type B contribute to Type C – baseline uncertainty:
• uncertainty resulting from the accuracy and precision of the measurements used to determine input factors
• Input factors: parameters and variables
• Accuracy and precision of the model in representing the processes it is supposed to simulate
• Internal parameters: their accuracy and precision are harder to evaluate, as they are often derived (e.g. statistically) from different measurement methods. In addition, they are mostly hard-coded.

  21. Sources of model uncertainty
• Subset of Type B – scenario uncertainty:
• resulting from the vagueness of scenarios of the future
• The input factors are uncertain as they depend on unpredictable conditions
• The base assumptions of our models may be uncertain because they are based on current system conditions, which may change in the future
