
An evaluation of the probabilistic information in multi-model ensembles


Presentation Transcript


  1. An evaluation of the probabilistic information in multi-model ensembles David Unger, Huug van den Dool, Malaquias Pena, Peitao Peng, and Suranjana Saha 30th Annual Climate Prediction and Diagnostics Workshop, October 25, 2005

  2. Objective • Produce a Probability Distribution Function (PDF) from the ensembles. Challenges • Calibration • Accounting for skill • Retaining the information in the ensembles (or discarding it when there is no skill)

  3. Schematic illustration (figure; x-axis: Temperature)

  4. (figure: σ)

  5. Schematic example (figure: kernels of width σz; x-axis: Temperature (F))

  6. Kernel vs. Mean
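For concreteness, here is a minimal sketch (Python; all numbers and names are illustrative, not from the talk) of the two constructions slide 6 contrasts: a PDF built by placing a Gaussian kernel on every ensemble member, versus a single Gaussian centered on the ensemble mean.

```python
import numpy as np

def kernel_pdf(x, members, sigma):
    """PDF from a Gaussian kernel of width sigma centered on each member."""
    z = (x[:, None] - members[None, :]) / sigma
    return np.exp(-0.5 * z**2).sum(axis=1) / (members.size * sigma * np.sqrt(2 * np.pi))

def mean_pdf(x, members, sigma_e):
    """Single Gaussian centered on the ensemble mean, with error width sigma_e."""
    z = (x - members.mean()) / sigma_e
    return np.exp(-0.5 * z**2) / (sigma_e * np.sqrt(2 * np.pi))

members = np.array([68.0, 70.5, 71.0, 73.5, 75.0])  # hypothetical temperatures (F)
x = np.linspace(60.0, 85.0, 251)
pdf_kernel = kernel_pdf(x, members, sigma=2.0)  # can be multimodal
pdf_mean = mean_pdf(x, members, sigma_e=3.0)    # always unimodal
```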

  7. Regression Step 1. Standardization: Zm = (Fm − μFm)/σFm Step 2. Skill adjustment: Ẑ = Rm Zm Step 3. Make the forecast: F̂ = μc + Ẑ σc
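A hedged reading of these three steps as code, using the symbols of slides 8-10 (the function and argument names are mine):

```python
import numpy as np

def regression_forecast(f_m, mu_fm, sd_fm, mu_c, sd_c, r_m):
    """Regression on the ensemble mean, in standardized form.

    f_m: ensemble-mean forecast; mu_fm, sd_fm: mean and sigma_Fm of past
    ensemble-mean forecasts; mu_c, sd_c: climatological mean and sigma_c;
    r_m: correlation of the ensemble mean with observations.
    """
    z_m = (f_m - mu_fm) / sd_fm             # Step 1: standardization
    z_hat = r_m * z_m                       # Step 2: skill adjustment (damp toward climatology)
    f_hat = mu_c + z_hat * sd_c             # Step 3: the forecast, in physical units
    sigma_e = sd_c * np.sqrt(1.0 - r_m**2)  # slide 8: residual forecast spread
    return f_hat, sigma_e
```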

  8. σe = σc √(1 − Rm²)

  9. Analysis of Ensemble Variance (V = Variance) Total Variance = Explained Variance + Unexplained Variance σc² = Ve + Vu σc² = σFm² + σe² σc² = σFm² + (<E²> + σz²)

  10. Analysis of Ensemble Variance (Continued) With the help of some relationships commonly used in linear regression: <E²> = (Rm² − Ri²) σc² σz² = (1 − 2Rm² + Ri²) σc² Rz² = 2Rm² − Ri², with Rz ≤ 1
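These relations are easy to sanity-check numerically. A small sketch (names mine; slide 13's Rfm is read as Rm) that reproduces slide 13's Rz from its Rm and Ri:

```python
import numpy as np

def variance_budget(r_m, r_i, sd_c=1.0):
    """Slide-10 relations: kernel-center spread <E^2>, kernel width
    sigma_z^2, and implied member skill R_z (capped at 1)."""
    e2 = (r_m**2 - r_i**2) * sd_c**2                     # <E^2>
    sigma_z2 = (1.0 - 2.0 * r_m**2 + r_i**2) * sd_c**2   # sigma_z^2
    r_z = np.sqrt(min(2.0 * r_m**2 - r_i**2, 1.0))       # R_z <= 1
    return e2, sigma_z2, r_z

# Slide 13's case: Rm=.94, Ri=.90 implies Rz ~ .97
print(variance_budget(0.94, 0.90))  # (0.0736, 0.0428..., 0.978...)
```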

  11. Ensemble Calibration Step 1. Standardization Step 2. Ensemble Spread Adjustment: Zi′ = K(Zi − Zm) + Zm Step 3. Skill Adjustment: Ẑi = Rz Zi′ Step 4. Make the Forecast: F̂i = μc + Ẑi σc
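The four steps as a sketch (Python; argument names are mine, and the spread factor K is taken as an input because its derivation is not recoverable from this transcript):

```python
import numpy as np

def calibrate_ensemble(members, mu_fm, sd_fm, mu_c, sd_c, r_m, r_i, k):
    """Slide-11 calibration applied to one ensemble (a NumPy array).

    mu_fm, sd_fm: statistics of past ensemble-mean forecasts; mu_c, sd_c:
    climatology; r_m, r_i: skill of the ensemble mean and of an individual
    member; k: spread-adjustment factor K.
    """
    z = (members - mu_fm) / sd_fm                         # Step 1: standardization
    z_m = z.mean()
    z_adj = k * (z - z_m) + z_m                           # Step 2: spread adjustment
    r_z = np.sqrt(min(2.0 * r_m**2 - r_i**2, 1.0))        # slide 10
    z_hat = r_z * z_adj                                   # Step 3: skill adjustment
    f_hat = mu_c + z_hat * sd_c                           # Step 4: calibrated members
    sigma_z = sd_c * np.sqrt(max(1.0 - 2.0 * r_m**2 + r_i**2, 0.0))
    return f_hat, sigma_z  # place a kernel of width sigma_z on each member
```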

  12. Schematic example (figure: kernels of width σz; x-axis: Temperature (F))

  13. (figure) Rz=.97, Rfm=.94, Ri=.90

  14. (figure) Rz=.93, Rfm=.87, Ri=.30

  15. (figure) Rz=.85, Rfm=.67, Ri=.41

  16. (figure) Rz=.62, Rfm=.46, Ri=.20

  17. Weighting

  18. Weights: 50% Ensemble 1; 17% each for Ensembles 2, 3, and 4
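One way to realize such weights (a sketch only; the talk's actual weighting scheme is not shown in this transcript) is a weighted mixture of each ensemble's kernel PDF:

```python
import numpy as np

def weighted_pdf(x, ensembles, sigmas, weights):
    """Weighted multi-model PDF: a mixture of per-ensemble kernel PDFs.

    ensembles: list of 1-D arrays of calibrated members; sigmas: kernel
    width per ensemble; weights: nonnegative and summing to 1 (e.g.
    roughly the 0.50/0.17/0.17/0.17 split of this slide).
    """
    pdf = np.zeros_like(x)
    for members, sigma, w in zip(ensembles, sigmas, weights):
        z = (x[:, None] - members[None, :]) / sigma
        pdf += w * np.exp(-0.5 * z**2).sum(axis=1) / (
            members.size * sigma * np.sqrt(2.0 * np.pi))
    return pdf
```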

  19. Real time system • Time-series estimates of statistics • Exponential filter: FT+1 = (1 − α)FT + α fT+1 • Initial guess provided by 1956-1981 CA statistics
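The filter update is a one-liner; a sketch of the running estimate (names mine), where the first guess plays the role of the 1956-1981 CA-based initial value:

```python
def filtered_statistic(cases, alpha, first_guess):
    """Exponential filter F_{T+1} = (1 - alpha) F_T + alpha f_{T+1}.

    cases: the statistic recomputed from each new case, in time order;
    alpha: filter weight (small alpha = long memory); first_guess: the
    initial value of the running estimate.
    """
    estimate = first_guess
    for f in cases:
        estimate = (1.0 - alpha) * estimate + alpha * f
    return estimate
```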

  20. Continuous Ranked Probability Score
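For reference, the standard ensemble estimator of the CRPS, CRPS = E|X − y| − ½ E|X − X′|, as a sketch (it treats the members as a raw ensemble rather than a kernel-smoothed PDF). The skill score on the later slides is CRPSS = 1 − CRPS/CRPS_climatology.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS = E|X - y| - 0.5 E|X - X'| for an ensemble; smaller is better."""
    m = np.asarray(members, dtype=float)
    return np.abs(m - obs).mean() - 0.5 * np.abs(m[:, None] - m[None, :]).mean()

def crpss(crps_forecast, crps_climatology):
    """Skill score relative to climatology, as on slides 24-26."""
    return 1.0 - crps_forecast / crps_climatology
```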

  21. Some Results • Nino 3.4 SSTs, Operational system: 15 CFS, 12 CA, 1 CCA, 1 MKV • DEMETER data: 9 CFS, 12 CA, 1 CCA, 1 MKV, 9 UKM, 9 MFR, 9 MPI, 9 ECM, 9 ING, 9 LOD, 9 CER

  22. Nino 3.4 SSTs

  23. Nino 3.4 SST, 5-month lead, by initial time, 1982-2001

  24. CRPSS – Nino 3.4 SSTs, all initial times, 1982-2001

  25. CRPSS – Nino 3.4 SSTs, all initial times, 1990-2001 (independent)

  26. CRPSS – Nino 3.4 SSTs, 5-month lead, all initial times, 1990-2001 (independent data)

  27. Reliability, Nino 3.4 SST (1990-2001)

  28. U.S. Temperature and Precipitation Consolidation • 15 CFS • 1 CCA • 1 SMLR Trends are removed from the models; statistics and the distribution are computed; the trend is added back to the end result (see the sketch below).
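The detrend-then-consolidate recipe, sketched (Python; the talk's trend estimator is not specified in this transcript, so a simple least-squares linear fit stands in, and the consolidation step is a placeholder):

```python
import numpy as np

def detrend(series, years):
    """Remove a least-squares linear trend; return anomalies and the trend line."""
    slope, intercept = np.polyfit(years, series, 1)
    trend = slope * years + intercept
    return series - trend, trend

# Slide-28 flow: statistics/PDFs are built on detrended data,
# then the trend value for the target year is added back.
years = np.arange(1995, 2004)
obs = 0.03 * (years - 1995) + np.random.default_rng(0).normal(0, 0.5, years.size)
anoms, trend = detrend(obs, years)
consolidated_center = anoms.mean()                # placeholder for the consolidation
final_forecast = consolidated_center + trend[-1]  # trend added to the end result
```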

  29. Trend Problem • Should a skill mask be applied? How much? – This technique requires a quantitative estimate of the trend. • Component models sometimes “learn” trends, making bias correction difficult – the trend is doubled. • Errors in estimating high-frequency model forecasts

  30. U.S. T and P consolidation (figures: Skill Mask on Trends; No Skill Mask on Trends)

  31. CRPS and RPS-3 (BNA) Skill Scores: Temperature, 1-month lead, all initial times (figure: CRPS and RPS for Cons vs. Ensm)

  32. Reliability, U.S. Temperatures (1995-2003)

  33. CRPS and RPS-3 (BNA) Skill Scores: Precipitation, 1-month lead, all initial times (figure: CRPS and RPS for Cons vs. Ensm)

  34. Reliability, U.S. Precipitation (1995-2003)

  35. Conclusions • Calibrated ensembles and ensemble means score very closely (by CRPS); calibrated ensembles seem to have a slight edge. • No penalty for including many ensembles (but not much benefit either). • Considerable penalty for including less skillful ensembles – weighting is critical. • Probabilistic predictions are reliable (when viewed as a continuous PDF).

  36. Conclusions (Continued) • Calibrated ensembles tend to be slightly overconfident • Trends are a major problem – and are outside the realm of consolidation (but they are critically important for seasonal temperature forecasting).

  37. 6-10 day Forecasts (based on Analogs)
