Weighting multi-models in seasonal forecasting … and what can be learnt for the combination of RCMs
Andreas Weigel, Mark Liniger, Christof Appenzeller
2nd Lund Regional-scale Climate Modelling Workshop, Lund, Sweden, 6 May 2009

Presentation Transcript


  1. Weighting multi-models in seasonal forecasting …and what can be learnt for the combination of RCMs 6 May 2009 2nd Lund Regional-scale Climate Modelling Workshop Lund, Sweden Andreas Weigel, Mark Liniger, Christof Appenzeller

  2. Outline • A few general remarks about seasonal forecasting • Why can multi-models be useful? • How can multi-models be weighted? • What are the problematic issues? • “Subjective” outlook: What can be learnt for the combination of RCMs?

  3. Climate projections • Tendency of climate characteristics (e.g. the mean state) under the influence of changing boundary conditions -> conceptually similar to seasonal forecasting • Multidecadal climate projections: boundary condition is the radiative forcing, derived from given emission scenarios • Seasonal forecasting: boundary conditions are SST, soil moisture, snow cover, …, derived from an observed initial state

  4. Verification • Each forecast i is compared with the corresponding observation i to give a score i; the scores are aggregated into a skill score [schematic: Forecast 1…N, Observation 1…N, Score 1…N -> Skill Score] • RPSS (ranked probability skill score): probabilistic generalization of the MSE (mean squared error) • Measures the degree to which the forecasts outperform random guessing
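For illustration, a minimal sketch (my own, not from the talk; tercile categories and a climatological reference of (1/3, 1/3, 1/3) are assumed) of how such an RPSS can be computed for probabilistic category forecasts:

```python
# Minimal RPSS sketch: the RPS is the squared error of cumulative probabilities,
# and the RPSS compares the forecasts with a climatological reference forecast.
import numpy as np

def rps(prob, obs_cat, n_cat=3):
    """Ranked probability score of one forecast: squared error of cumulative probabilities."""
    obs = np.zeros(n_cat)
    obs[obs_cat] = 1.0
    return np.sum((np.cumsum(prob) - np.cumsum(obs)) ** 2)

def rpss(forecasts, obs_cats):
    """RPSS = 1 - mean RPS(forecast) / mean RPS(climatology); 1 = perfect, 0 = no better than climatology."""
    clim = np.full(3, 1.0 / 3.0)
    rps_fc = np.mean([rps(p, o) for p, o in zip(forecasts, obs_cats)])
    rps_cl = np.mean([rps(clim, o) for o in obs_cats])
    return 1.0 - rps_fc / rps_cl

# Two hypothetical tercile forecasts, verified against observed categories 0 and 2
print(rpss([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]], [0, 2]))
```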

  5. System 3 of ECMWF • Prediction range 7 months • Resolution: approx. 120 km • Reforecasts back to 1981 • Coupled ocean-atmosphere GCM • Every month: Initialization from global network of measurements

  6. Probability forecasts • To capture the range of uncertainties, seasonal climate models are run many times (41 members) from slightly perturbed initial conditions -> ENSEMBLE technique • Look at the distribution rather than single values: PROBABILITY FORECASTS
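As an illustration of the ensemble technique, a hedged sketch (not ECMWF's operational processing; the climatology sample and function name are my assumptions) of how an ensemble can be turned into a tercile probability forecast:

```python
# Count how many members fall below / between / above the climatological terciles.
import numpy as np

def tercile_probs(ensemble, climatology):
    """Fractions of members in the lower, middle and upper climatological tercile."""
    lo, hi = np.percentile(climatology, [100.0 / 3.0, 200.0 / 3.0])
    ens = np.asarray(ensemble)
    p_below = np.mean(ens < lo)
    p_above = np.mean(ens > hi)
    return np.array([p_below, 1.0 - p_below - p_above, p_above])

rng = np.random.default_rng(0)
climatology = rng.normal(0.0, 1.0, 1000)     # hypothetical climatological sample
ensemble = rng.normal(0.5, 0.8, 41)          # 41 runs from perturbed initial conditions
print(tercile_probs(ensemble, climatology))  # probabilities for below / near / above normal
```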

  7. Why should we use multi-models ?

  8.–13. A conceptual view [animated figure sequence] • A forecast is idealized as a predictable signal (μ) plus unpredictable noise around climatology • The ensemble members x̃1, …, x̃M additionally share a common error εb, so the forecast distribution is too sharp and displaced relative to the truth: OVERCONFIDENCE • Weigel et al., 2009, Mon. Wea. Rev., in press
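The conceptual picture can be written down compactly; the notation below is an assumption in the spirit of Weigel et al. (2008, 2009), not copied from the slides:

```latex
% Observation: predictable signal plus unpredictable climatological noise
x_{\mathrm{obs}} = \mu + \varepsilon_{\mathrm{obs}}
% Ensemble member i: the same signal, an error \varepsilon_b shared by all
% members, and member-specific noise \eta_i
\tilde{x}_i = \mu + \varepsilon_b + \eta_i, \qquad i = 1, \dots, M
% Overconfidence arises when the spread of the \eta_i does not account for
% \varepsilon_b: the forecast distribution is too sharp and displaced.
```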

  14. Overconfidence in real forecasts [example forecast maps issued 1 Aug 2008 and 1 Nov 2007]

  15.–16. Multi-models • Two kinds of uncertainties: uncertainty in the initialization -> sampled by ensembles; model error -> addressed by multi-models • Success demonstrated in many studies (Hagedorn et al. 2005, Doblas-Reyes et al. 2005, Palmer et al. 2004, Weigel et al. 2008, …)

  17. Combining overconfident models [schematic: combination of overconfident forecasts] Weigel et al., 2008, Quart. J. Roy. Meteor. Soc.

  18. “Single model” versus “multi-model” [forecast maps issued 1 Aug 2008: ECMWF alone vs. ECMWF + UK Met Office + Météo-France]

  19.–20. Combination of synthetic forecasts [figure: overconfident vs. reliable single models; averages over 1000 experiments] Weigel et al., 2008, Quart. J. Roy. Meteor. Soc.

  21. Multi-models (forecasts of summer T2) [RPSSD maps for ECMWF, UKMO and the “simple” (equal-weight) multi-model; colour scale from -0.6 to 0.6]

  22. Weighted multi-models

  23. Weighted Multimodels • Idea: Use past forecasts and observations to determine an optimum weighting (by optimizing a suitable skill metric) • Suitable metric: IGNORANCE, the “information deficit of a user (measured in bits), who is in possession of a probability forecast p, but does not yet know the true outcome” (Roulston and Smith 2002) • From information theory: IGN = -log2(p); p = 100% => IGN = 0; p = 50% => IGN = 1 bit; p = 0% => IGN = ∞
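A minimal sketch of this idea (an assumption, not the DEMETER implementation; the function names are mine): compute the ignorance IGN = -log2(p) of the probability assigned to the observed category, and search for the weight w that minimises the mean ignorance of a two-model combination over past forecasts.

```python
import numpy as np

def ignorance(probs, obs_cats):
    """Mean ignorance (in bits): -log2 of the probability given to the true outcome."""
    p_obs = np.array([p[o] for p, o in zip(probs, obs_cats)])
    return np.mean(-np.log2(np.clip(p_obs, 1e-12, None)))

def best_weight(probs_a, probs_b, obs_cats, n_steps=101):
    """Weight w in [0, 1] for  w * model_a + (1 - w) * model_b  minimising the mean ignorance."""
    weights = np.linspace(0.0, 1.0, n_steps)
    scores = [ignorance(w * probs_a + (1 - w) * probs_b, obs_cats) for w in weights]
    return weights[int(np.argmin(scores))]

# Hypothetical tercile forecasts of two models for three past cases
probs_a = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]])
probs_b = np.array([[0.4, 0.4, 0.2], [0.3, 0.4, 0.3], [0.2, 0.4, 0.4]])
obs_cats = [0, 1, 2]
print(best_weight(probs_a, probs_b, obs_cats))
```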

  24. Skill of weighted vs. unweighted multi-models (ECMWF + UKMO) [RPSSd maps, colour scale from -0.6 to 0.6; DEMETER, T2 JJA, initialisation 1 May, weights determined in cross-validation] Weigel et al., 2008, Quart. J. Roy. Meteor. Soc.

  25. Problematic issues in the context of model weighting • Robustness

  26.–28. Toy model experiments • Combining two (overconfident) models with different model errors, ranging from “good” to “bad” [animated figure sequence]

  29. Toy model experiments [figure: skill of the weighted combination relative to equal weighting, for single models that are very similar vs. very different; regions better / worse than equal weighting; averages over 100,000 experiments]
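The following is a much-reduced toy-model sketch (assumptions throughout; it is not the 100,000-experiment setup of the talk, and it uses a crude deterministic score rather than a probabilistic one): observations are signal plus noise, each model's ensemble shares a model-specific error (overconfidence), and pooling that favours the better model is compared with equal pooling.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_members = 2000, 9
mu = rng.normal(0.0, 1.0, n_cases)                     # predictable signal
obs = mu + rng.normal(0.0, 0.5, n_cases)               # observations

def overconfident_ensemble(model_err_sd):
    shared = rng.normal(0.0, model_err_sd, (n_cases, 1))   # error common to all members
    noise = rng.normal(0.0, 0.3, (n_cases, n_members))     # member-specific noise
    return mu[:, None] + shared + noise

ens_good = overconfident_ensemble(0.4)                 # smaller model error
ens_bad = overconfident_ensemble(1.2)                  # larger model error

def score(ens):
    """Crude stand-in for a proper probabilistic score: MAE of the ensemble mean (smaller is better)."""
    return np.mean(np.abs(ens.mean(axis=1) - obs))

equal = np.concatenate([ens_good, ens_bad], axis=1)               # equal weighting
weighted = np.concatenate([ens_good, ens_good, ens_bad], axis=1)  # 2:1 towards the better model
for name, ens in [("good model", ens_good), ("bad model", ens_bad),
                  ("equal multi-model", equal), ("weighted multi-model", weighted)]:
    print(f"{name:22s}  MAE of ensemble mean: {score(ens):.3f}")
```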

  30. Robustness • Combining forecasts from 2 models (UKMO, ECMWF) from the DEMETER database, grid point by grid point • Determine the average global skill (in % RPSS)
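A structural sketch of such an experiment (an assumed layout, not the original scripts; the function names are mine): at every grid point the weight is fitted in leave-one-year-out cross-validation and applied to the withheld year; averaging the RPSS of the resulting forecasts over all grid points then gives the global skill.

```python
# probs_a / probs_b: (years, categories) probability forecasts of the two models
# at one grid point; obs: (years,) array of observed categories.
import numpy as np

def fit_weight(pa, pb, obs):
    """Grid search for the weight minimising the mean ignorance on the training years."""
    weights = np.linspace(0.0, 1.0, 51)
    scores = []
    for w in weights:
        p_obs = np.array([(w * a + (1 - w) * b)[o] for a, b, o in zip(pa, pb, obs)])
        scores.append(np.mean(-np.log2(np.clip(p_obs, 1e-12, None))))
    return weights[int(np.argmin(scores))]

def crossval_combine(pa, pb, obs):
    """Leave-one-year-out: fit the weight on the other years, apply it to the withheld year."""
    n_years = len(obs)
    combined = np.empty_like(pa)
    for y in range(n_years):
        train = np.arange(n_years) != y
        w = fit_weight(pa[train], pb[train], obs[train])
        combined[y] = w * pa[y] + (1 - w) * pb[y]
    return combined
```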

  31.–32. Problematic issues in the context of model weighting • Robustness • Representativeness • Intrinsic unreliability

  33. Intrinsic unreliability • The “accuracy” of probabilistic forecasts decreases as the number of ensemble members decreases Weigel et al., 2007a, Mon. Wea. Rev.

  34. Intrinsic unreliability • The “accuracy” of probabilistic forecasts decreases as the number of ensemble members decreases • White-noise toy model • No skill by construction Weigel et al., 2007a, Mon. Wea. Rev.
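To illustrate the point, a minimal simulation sketch (my assumption of the setup, not the published code): ensembles of white noise have no skill by construction, yet their mean RPSS against a climatological reference becomes increasingly negative as the ensemble size M shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases = 20000
obs_cat = rng.integers(0, 3, n_cases)                 # observed tercile, random
terciles = [-0.4307, 0.4307]                          # terciles of N(0, 1)

def rps(prob, o):
    cum_obs = np.zeros(3)
    cum_obs[o:] = 1.0                                 # cumulative one-hot observation
    return np.sum((np.cumsum(prob) - cum_obs) ** 2)

def mean_rpss(m):
    """RPSS of M-member white-noise ensembles, with climatology (1/3, 1/3, 1/3) as reference."""
    ens = rng.standard_normal((n_cases, m))
    counts = np.stack([(ens < terciles[0]).sum(1),
                       ((ens >= terciles[0]) & (ens < terciles[1])).sum(1),
                       (ens >= terciles[1]).sum(1)], axis=1)
    probs = counts / m
    clim = np.full(3, 1.0 / 3.0)
    rps_fc = np.mean([rps(p, o) for p, o in zip(probs, obs_cat)])
    rps_cl = np.mean([rps(clim, o) for o in obs_cat])
    return 1.0 - rps_fc / rps_cl

for m in (40, 10, 5, 2):
    print(f"M = {m:2d}: mean RPSS = {mean_rpss(m):+.3f}")   # more negative for small M
```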

  35. Intrinsic unreliability • Weighting reduces the effective ensemble size Weigel et al., 2007b, Mon. Wea. Rev.

  36. Conclusions

  37. Summary • Seasonal forecasts are “climate” predictions • Advantage: we can verify exactly those variables we actually want to predict • The success of multi-model combination can be demonstrated by verification, reproduced with toy-model simulations, and understood by conceptual considerations • Weighting multi-models can further improve the prediction skill and reduce the forecast ignorance, at least if the models differ in their skill • Using wrong weights can be harmful!

  38. Reliability of weights: seasonal forecasts  

  39. Reliability of weights: climate scenarios

  40. Final conclusions • Combination of climate change projections is much more difficult than combination of seasonal forecasts • There are many problematic issues at the moment which have not yet been sufficiently addressed • But: I think many of these issues can be addressed with the data and methods at hand! • These issues should be addressed and further explored, given that weighting can in principle improve the forecasts
