
A more reliable COSMO-LEPS. F. Fundel, A. Walser, M. A. Liniger, C. Appenzeller


Presentation Transcript


  1. A more reliable COSMO-LEPS. F. Fundel, A. Walser, M. A. Liniger, C. Appenzeller. COSMO General Meeting, 15 September 2008

  2. Outline • Motivation • Method • Products • Verification • Sensitivity Studies • Conclusion & Outlook

  3. Why calibrate? [Figure: relative difference (OBS - CLEPS) / ((OBS + CLEPS)/2), 1971-2000, lead time 42h; panels for quantiles Q0.8 and Q0.95 in January and July.] COSMO-LEPS is not reliable; the forecast probabilities might be wrong. Hence the need for calibration.

  4. COSMO-LEPS reforecasts (v 4.0) Setup • Period of 30 years (1971-2000) • Deterministic run of COSMO-LEPS (1 member) • 90h lead time, 12:00 initial time (every 3rd day) • ERA-40 as initial/boundary conditions • Calculated on hpce at ECMWF • Archived in MARS • Convective scheme: Tiedtke/Kain-Fritsch • Random physics (turlen & patlen) Limitations • A new climatology is needed with each model version change • Takes time and is costly (present setup ca. 2 million SBU/year)

  5. Reforecasts in literature • "However, the improved skill from calibration using large datasets is equivalent to the skill increases afforded by perhaps 5–10 yr of numerical modeling system development and model resolution increases." (Wilks and Hamill, Mon. Wea. Rev., 2007) • "Use of reforecasts improved probabilistic precipitation forecasts dramatically, aided the diagnosis of model biases, and provided enough forecast samples to answer some interesting questions about predictability in the forecast model." (Hamill et al., BAMS, 2006) • "…reforecast data sets may be particularly helpful in the improvement of probabilistic forecasts of the variables that are most directly relevant to many forecast users…" (Hamill and Whitaker, submitted to Mon. Wea. Rev., 2006) • "…large improvements in forecast skill and reliability are possible through the use of reforecasts, even with a modernized forecast model." (Hamill et al., submitted to Mon. Wea. Rev., 2007)

  6. What's new? • Calibration of a modern high-resolution LEPS • Return periods don't require observations • Calibration of the entire model domain (in principle every parameter)

  7. Calibration strategy [Schematic: CDF and PDF of the OBS, MOD and FCST climatologies; a forecast value x is assigned a "raw return period" via the observed climatology and a "calibrated return period" via the model climatology.] Quantiles w.r.t. observations are not reliable; quantiles w.r.t. the model climatology are reliable (see the sketch below).
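Editor's note: a minimal, self-contained Python/NumPy sketch of the return-period idea on this slide, not the operational MeteoSwiss code. A forecast value x is assigned an empirical return period with respect to a climatological sample; evaluating it against the model's own reforecast climatology instead of the observed one removes the model bias. The Weibull plotting position and all sample data are illustrative assumptions.

"""Sketch of slide 7: raw vs. calibrated return periods."""
import numpy as np

def return_period(x, climatology):
    """Empirical return period of x w.r.t. a climatological sample.

    Uses the Weibull plotting position F(x) = rank / (n + 1); the return
    period, in units of the sampling interval, is 1 / (1 - F(x)).
    """
    clim = np.sort(np.asarray(climatology))
    n = clim.size
    # number of climatological values <= x gives the empirical CDF
    rank = np.searchsorted(clim, x, side="right")
    f = rank / (n + 1.0)
    return 1.0 / (1.0 - f)

rng = np.random.default_rng(0)
obs_clim = rng.gamma(shape=0.6, scale=8.0, size=3000)        # "OBS" climatology
mod_clim = 1.3 * rng.gamma(shape=0.6, scale=8.0, size=3000)  # biased "MOD" climatology

x_fcst = 40.0  # a raw model value of 24h precipitation [mm]

# Raw: interpreting the model value against the observed climatology
# inherits the model bias; calibrated: use the model's own climatology.
print("raw return period       :", return_period(x_fcst, obs_clim))
print("calibrated return period:", return_period(x_fcst, mod_clim))

In this toy example the two return periods differ markedly because the model climatology is wet-biased, which is exactly the kind of miscalibration shown on slide 3.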

  8. Return-period-based warnings [Schematic: precipitation (RR) time series are averaged over 65 target regions, and warning levels are applied to the domain averages; 3 types of alerts.] A sketch of this pipeline follows below.
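Editor's note: a sketch of the warning pipeline under stated assumptions. The alert thresholds, the toy mm-to-return-period mapping and the region mask are hypothetical placeholders; only the structure (domain average per target region, then warning-level lookup) follows the slide.

"""Sketch of slide 8: alerts from domain-averaged precipitation."""
import numpy as np

# hypothetical return-period thresholds [years] for the 3 alert types
ALERT_LEVELS = [(2.0, "advisory"), (10.0, "warning"), (50.0, "severe warning")]

def alert_for_region(fcst_field, region_mask, rp_of_value):
    """Domain-average the forecast over one target region and assign
    the highest alert level whose return-period threshold is exceeded."""
    domain_mean = fcst_field[region_mask].mean()
    rp = rp_of_value(domain_mean)  # e.g. return_period() from the previous sketch
    alert = None
    for threshold, name in ALERT_LEVELS:
        if rp >= threshold:
            alert = name
    return domain_mean, rp, alert

rng = np.random.default_rng(1)
field = rng.gamma(0.6, 8.0, size=(100, 100))  # toy 24h precipitation field [mm]
mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 30:60] = True                     # stand-in for one of the 65 regions

# toy mapping from mm to return period [years]; placeholder only
toy_rp = lambda v: np.exp(v / 5.0) / 10.0
print(alert_for_region(field, mask, toy_rp))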

  9. Products Probabilities to exceed a return period with respect to the current month, and the corresponding return level [mm/24h]. (A computation sketch follows below.)
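Editor's note: a sketch of how such an exceedance probability could be computed from the ensemble members and a month-specific model climatology. The member values, the climatology and the threshold are synthetic, and return periods are in units of the climatology's sampling interval.

"""Sketch of slides 9-10: probability to exceed a return period."""
import numpy as np

def exceedance_probability(members, clim_month, rp_threshold):
    """Fraction of ensemble members whose calibrated return period,
    w.r.t. the model climatology of the current month, exceeds the
    requested return-period threshold."""
    clim = np.sort(np.asarray(clim_month))
    n = clim.size
    ranks = np.searchsorted(clim, members, side="right")
    rp = 1.0 / (1.0 - ranks / (n + 1.0))  # member-wise return periods
    return np.mean(rp >= rp_threshold)

rng = np.random.default_rng(2)
clim_jul = rng.gamma(0.6, 8.0, size=900)   # toy July model climatology
members = rng.gamma(0.6, 12.0, size=16)    # toy 16-member COSMO-LEPS forecast
print(exceedance_probability(members, clim_jul, rp_threshold=20.0))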

  10. Products Probabilities to exceed a return period with respect to the current month.

  11. Products

  12. Products [Figure legend: obs. quantiles, obs., COSMO-7, calibrated CLEPS.]

  13. Verification Observation Data • 24h Tot_Prec (0600-0600 UTC) • Domain: Switzerland • Interpolated onto the CLEPS grid (C. Frei) • Apr 06 – Aug 07 (verification) • 1971-2000 (calibration) [Figure: model topography [m]]

  14. Verification Reliability diagram (a computation sketch follows below):
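Editor's note: for reference, a minimal reliability-diagram computation on synthetic data: forecast probabilities are binned, and each bin's mean forecast probability is compared with the observed event frequency. The binning choice and the data are illustrative; the real verification uses the gridded Swiss 24h precipitation described on slide 13.

"""Sketch of slides 14-16: reliability-diagram statistics."""
import numpy as np

def reliability_curve(p_fcst, obs_event, n_bins=10):
    """Return (mean forecast prob, observed frequency, count) per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_fcst, bins) - 1, 0, n_bins - 1)
    mean_p, obs_freq, count = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            mean_p.append(p_fcst[sel].mean())
            obs_freq.append(obs_event[sel].mean())
            count.append(int(sel.sum()))
    return np.array(mean_p), np.array(obs_freq), np.array(count)

rng = np.random.default_rng(3)
p = rng.uniform(0, 1, 5000)
o = rng.uniform(0, 1, 5000) < p**1.5  # slightly overconfident toy forecasts
print(reliability_curve(p, o.astype(float)))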

  15. Winter 06/07, Q0.8, 24h precipitation [Reliability diagrams, raw vs. calibrated, lead times 18-42h and 66-90h.] • raw forecast overconfident, very limited skill • strong improvement in reliability • long lead-time forecasts more reliable

  16. Summer 06 & 07, Q0.8, 24h precipitation [Reliability diagrams, raw vs. calibrated, lead times 18-42h and 66-90h.] • raw forecast overconfident • modest improvement of reliability • long lead-time forecasts more reliable

  17. BSSD (debiased Brier skill score, see slide 26), 24h precipitation [Figure: raw vs. calibrated, winter and summer panels.]

  18. Economic value Q0.8, 24h precipitation, 4 days lead time [Figure: economic value curves, Winter 06/07 and Summer 06 & 07.] (A cost-loss value sketch follows below.)
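Editor's note: a sketch of the potential economic value in the standard static cost-loss model (Richardson 2000, QJRMS), which is presumably what underlies this plot: a user with cost/loss ratio C/L protects whenever the event is forecast, and the value V compares the user's expense with the climatological and perfect-forecast expenses. The hit rate, false-alarm rate and base rate below are made-up numbers.

"""Sketch of slide 18: potential economic value for one C/L ratio."""
import numpy as np

def economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Potential economic value V (1 = perfect, 0 = climatology)."""
    a, s, H, F = cost_loss, base_rate, hit_rate, false_alarm_rate
    e_clim = min(a, s)            # cheaper of always/never protecting
    e_perfect = s * a             # protect exactly when the event occurs
    e_fcst = a * F * (1 - s) + a * H * s + s * (1 - H)
    return (e_clim - e_fcst) / (e_clim - e_perfect)

for cl in (0.05, 0.2, 0.5):       # value depends on the user's C/L ratio
    print(cl, economic_value(hit_rate=0.7, false_alarm_rate=0.1,
                             base_rate=0.2, cost_loss=cl))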

  19. BSSD: relative skill improvement (Apr 06–Aug 07) [Figure: panels for Q0.8 and Q0.95, Day 1 and Day 4; areas where the raw forecast skill is < 0 are marked.]

  20. Sensitivity Study: Calibration Period [Schematic: of the available 1971-2000 reforecasts, subsets of varying length are used to calibrate the actual forecast.]

  21. Sensitivity Study: Calibration Period • a short subset suffices for the calibration of frequent events • a larger subset is suggested for the calibration of extreme events • points to a relatively weak seasonality of frequent events

  22. Sensitivity Study: Reforecast length [Schematic: reforecast periods of decreasing length, all ending in 2000, are used to calibrate the actual forecast.]

  23. Sensitivity Study: Reforecast length • 15-20 years seem to be sufficient to calibrate frequent events • extreme events require a large set of reforecasts • more than 2 years of reforecasts are needed for a better-than-uncalibrated forecast

  24. Conclusions • Calibrating with reforecasts improves the forecast skill significantly • Most effective in winter • Beneficial for all users (i.e. all C/L ratios) • Calibrated warnings without using observations are possible • Calibrating frequent precipitation events does not require a large calibration period • However, calibrating extreme events does

  25. Outlook • Optimal use of reforecasts • Publications • Calibration of wind gusts • Operational implementation: • calibration & plot generation at CSCS (time-critical) • reforecasts at ECMWF (non-time-critical)

  26. The BSS debiased (Weigel et al. 2007, Mon. Wea. Rev.) Special case BSS: BSS_D = 1 - BS / (BS_cl + D), with D = p(1 - p) / M, where M is the ensemble size, p the climatological event probability, and BS_cl = p(1 - p) the climatological reference Brier score. (A computation sketch follows below.)
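Editor's note: a sketch of the BSS_D computation. The correction term D is the intrinsic unreliability of a random M-member climatological ensemble, so a skill-less finite ensemble scores BSS_D close to 0 instead of a negative standard BSS (about -1/M). The synthetic "forecast" below is exactly such a random 16-member climatological ensemble.

"""Sketch of slide 26: debiased Brier skill score."""
import numpy as np

def debiased_bss(p_fcst, obs_event, n_members):
    """Debiased Brier skill score for an M-member ensemble forecast."""
    p_fcst = np.asarray(p_fcst, dtype=float)
    obs = np.asarray(obs_event, dtype=float)
    bs = np.mean((p_fcst - obs) ** 2)        # Brier score of the forecast
    p_clim = obs.mean()                      # climatological base rate
    bs_clim = p_clim * (1.0 - p_clim)        # climatological reference score
    d = p_clim * (1.0 - p_clim) / n_members  # finite-ensemble correction D
    return 1.0 - bs / (bs_clim + d)

rng = np.random.default_rng(4)
obs = (rng.uniform(size=2000) < 0.2).astype(float)
# a skill-less forecast: random 16-member climatological ensembles
p_random = rng.binomial(16, 0.2, size=2000) / 16.0
print("standard BSS:", 1 - np.mean((p_random - obs) ** 2) / (0.2 * 0.8))
print("debiased BSS:", debiased_bss(p_random, obs, n_members=16))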
