

  1. Operational Seasonal Forecast Systems: a view from ECMWF Tim Stockdale The team: Franco Molteni, Magdalena Balmaseda, Kristian Mogensen, Frederic Vitart, Laura Ferranti European Centre for Medium-Range Weather Forecasts

  2. Outline • Operational seasonal systems at ECMWF • System 3 - configuration • System 3 – products • System 3 – skill measures • EUROSIP • ECMWF, Met Office and Météo-France multi-model system • Some (relevant) issues in seasonal prediction • Estimating skill and model improvement • Cost effective systems • Multi-model systems, data sharing policy

  3. Sources of seasonal predictability • KNOWN TO BE IMPORTANT: • El Niño variability - biggest single signal • Other tropical ocean SST - important, but multifarious • Climate change - especially important in mid-latitudes • Local land surface conditions - e.g. soil moisture in spring • OTHER FACTORS: • Volcanic eruptions - definitely important for large events • Mid-latitude ocean temperatures - still somewhat controversial • Remote soil moisture / snow cover - not well established • Sea ice anomalies - local effects, but remote? • Dynamic memory of atmosphere - most likely on 1-2 month timescales • Stratospheric influences - solar cycle, QBO, ozone, … • Unknown or Unexpected - ???

  4. ECMWF operational seasonal forecasts • Real-time forecasts since 1997 • “System 1” initially made public as “experimental” in Dec 1997 • System 2 started running in August 2001, released in early 2002 • System 3 started running in Sept 2006, operational in March 2007 • Burst-mode ensemble forecast • Initial conditions are valid for 0Z on the 1st of a month • Forecast is typically created on the 11th/12th (SST data is delayed by up to 11 days) • Forecast and product release date is 12Z on the 15th • Range of operational products • Moderately extensive set of graphical products on web • Raw data in MARS • Formal dissemination of real-time forecast data

  5. ECMWF System 3 – the model • IFS (atmosphere) • TL159L62 Cy31r1, 1.125 deg grid for physics (operational in Sep 2006) • Full set of singular vectors from the EPS system to perturb atmosphere initial conditions (more sophisticated than needed …) • Ocean currents coupled to atmosphere boundary layer calculations • HOPE (ocean) • Global ocean model, 1°x1° mid-latitude resolution, 0.3° near the equator • A lot of work went into developing the OI ocean analyses, including analysis of salinity, multivariate bias corrections and use of altimetry • Coupling • Fully coupled, with no flux adjustments, although there is no physical model of sea ice

  6. System 3 configuration • Real-time forecasts: • 41-member ensemble forecast to 7 months • SST and atmos. perturbations added to each member • 11-member ensemble forecast to 13 months • Designed to give an ‘outlook’ for ENSO • Only once per quarter (Feb, May, Aug and Nov starts) • November starts actually run 14 months (to year end) • Back integrations for 1981-2005 (25 years) • 11-member ensemble every month • 5 members to 13 months, once per quarter

  7. Other operational plots for DJF 2010/11

  8. Tropical storm forecasts

  9. Performance – SST and ENSO. Rms error of forecasts has been systematically reduced (solid lines) … but ensemble spread (dashed lines) is still substantially less than actual forecast error.
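
For reference, a minimal sketch of how curves like these are computed (toy data, NumPy assumed; not ECMWF's verification code): RMS error of the ensemble-mean forecast against observations, and spread as the RMS deviation of members about the ensemble mean. In a well-calibrated ensemble the two match; the toy ensemble below is under-dispersive, as on the slide:

```python
import numpy as np

def rmse_and_spread(forecasts, obs):
    """forecasts: (n_cases, n_members) ensemble forecasts (e.g. NINO3.4 SST);
    obs: (n_cases,) verifying observations. Returns (rmse, spread)."""
    ens_mean = forecasts.mean(axis=1)
    # RMS error of the ensemble mean against the observations
    rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
    # Ensemble spread: RMS deviation of the members about the ensemble mean
    spread = np.sqrt(np.mean((forecasts - ens_mean[:, None]) ** 2))
    return rmse, spread

# Toy under-dispersive ensemble: all members share a common error component
rng = np.random.default_rng(0)
truth = rng.normal(size=200)
common = rng.normal(scale=0.4, size=(200, 1))   # error shared by all members
members = truth[:, None] + common + rng.normal(scale=0.2, size=(200, 41))
rmse, spread = rmse_and_spread(members, truth)
print(f"rmse={rmse:.2f}, spread={spread:.2f}")  # spread < rmse, as on the slide
```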

  10. More recent SST forecasts are better … (figure panels compare the 1981-1993 and 1994-2007 periods)

  11. At longer leads, model spread starts to catch up

  12. How good are the forecasts? Deterministic skill: DJF ACC. Map panels: temperature, actual forecasts vs perfect model.

  13. How good are the forecasts? Deterministic skill: DJF ACC. Map panels: precipitation, actual forecasts vs perfect model.
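
The ACC on this slide and the previous one is the standard anomaly correlation coefficient. A minimal sketch under stated assumptions (anomalies already computed against the respective model and observed climatologies; names are illustrative):

```python
import numpy as np

def anomaly_correlation(fcst_anom, obs_anom):
    """DJF anomaly correlation coefficient at one grid point.

    fcst_anom, obs_anom: (n_years,) ensemble-mean forecast anomalies and
    observed anomalies, each relative to its own climatology."""
    num = np.sum(fcst_anom * obs_anom)
    den = np.sqrt(np.sum(fcst_anom ** 2) * np.sum(obs_anom ** 2))
    return num / den
```

For the "perfect model" panels, the same score is typically computed with one ensemble member standing in for the observations and the remaining members treated as the forecast, giving an estimate of the skill attainable if the model were perfect.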

  14. How good are the forecasts? Probabilistic skill: reliability diagrams. Panels: tropical precip < lower tercile (JJA); NH extratropical temp > upper tercile (DJF).

  15. How good are the forecasts? Probabilistic skill: reliability diagrams. Panel: Europe, temp > upper tercile (DJF).
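
The reliability diagrams on slides 14 and 15 plot observed event frequency against binned forecast probability. A brief sketch of the bookkeeping behind such a plot (hypothetical inputs, not ECMWF's verification code):

```python
import numpy as np

def reliability_curve(prob_fcst, event_obs, n_bins=10):
    """Points for a reliability diagram.

    prob_fcst: (n_cases,) forecast probabilities of the event, e.g. the
               fraction of ensemble members above the upper tercile.
    event_obs: (n_cases,) 1 where the event occurred, 0 otherwise."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.clip(np.digitize(prob_fcst, edges) - 1, 0, n_bins - 1)
    pts = []
    for b in range(n_bins):
        mask = which == b
        if mask.any():
            # (mean forecast probability, observed frequency) in this bin
            pts.append((prob_fcst[mask].mean(), event_obs[mask].mean()))
    return np.array(pts)  # perfectly reliable forecasts lie on the diagonal
```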

  16. EUROSIP

  17. Single model vs multi-model (figure panels).

  18. DEMETER: multi-model vs single-model. Reliability diagrams (T2m > 0), 1-month lead, start date May, 1980-2001 (Hagedorn et al., 2005). Scores per system:

      BSS      Rel-Sc   Res-Sc
      0.039    0.899    0.141
      0.095    0.926    0.169
     -0.001    0.877    0.123
      0.039    0.899    0.140
      0.204    0.990    0.213
      0.047    0.893    0.153
      0.065    0.918    0.147
     -0.064    0.838    0.099

  (Each row is one forecast system; the multi-model, highlighted in the original figure, is evidently the best-scoring row, BSS 0.204.)
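
The three columns are internally consistent: in the Murphy decomposition of the Brier score written in skill-score form (assuming Rel-Sc = 1 - REL/UNC and Res-Sc = RES/UNC, definitions the slide does not state explicitly), BSS = Rel-Sc + Res-Sc - 1, and every row of the table satisfies this identity to within rounding:

```python
# Consistency check of the slide's table: BSS = Rel-Sc + Res-Sc - 1
# (Murphy decomposition in skill-score form; the exact definitions of
# Rel-Sc and Res-Sc are inferred, not stated on the slide).
rows = [  # (BSS, Rel-Sc, Res-Sc)
    (0.039, 0.899, 0.141), (0.095, 0.926, 0.169),
    (-0.001, 0.877, 0.123), (0.039, 0.899, 0.140),
    (0.204, 0.990, 0.213), (0.047, 0.893, 0.153),
    (0.065, 0.918, 0.147), (-0.064, 0.838, 0.099),
]
for bss, rel_sc, res_sc in rows:
    assert abs(bss - (rel_sc + res_sc - 1.0)) < 0.002
```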

  19. Some (relevant) issues in seasonal prediction

  20. Tentative results from ECMWF S4 (11 members, 20 years). Figure panels: System 3 vs Cy36r4 - T159L62.

  21. Alternate stochastic physics: 0.346 vs 0.294, a real improvement, now scoring better than S3 • T159L91, plus revised stratospheric physics: only 5 members, but the score of 0.342 is much better than L62

  22. T255L91: score is now 0.390, cf. 0.294 for T159L62 • T255L91, with alternate stochastic physics: score is 0.273, from the best to the worst! (Also other fields)

  23. Possible interpretations • Statistical testing (a paired-bootstrap example is sketched after this slide) suggests the differences are real, for this 20-year period • Different model configurations give different model “signals” in NH winter • The hope was that hemispheric averaging would increase the degrees of freedom enough to make scores meaningful • Hypothesis 1: this is not true; a given set of signals gets a given score for the 20-year period, but this is of no relevance to expected model skill in the future, and cannot be used for model selection • Hypothesis 2: some model configurations really do capture the “balance” of processes affecting NH winter circulation better, even if via compensation of errors. Better to choose the model with the better score.
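
One plausible form of the "statistical testing" mentioned above is a paired bootstrap over the 20 forecast years; this is a sketch of such a test, not necessarily the one actually used at ECMWF:

```python
import numpy as np

def paired_bootstrap_pvalue(scores_a, scores_b, n_boot=10_000, seed=0):
    """Two-sided bootstrap p-value for the mean skill difference between
    two model configurations, resampling forecast years with replacement.

    scores_a, scores_b: (n_years,) per-year scores (e.g. 20 DJF cases)."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    observed = diff.mean()
    centred = diff - observed               # impose H0: no mean difference
    n = diff.size
    boots = centred[rng.integers(0, n, size=(n_boot, n))].mean(axis=1)
    return float(np.mean(np.abs(boots) >= abs(observed)))
```

Significance for this particular 20-year period is, of course, exactly what Hypothesis 1 says need not generalise to future skill.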

  24. Choosing a model configuration • Encouraging that some configurations give good results • Higher horizontal and vertical resolution are consistently positive • Model climate is much improved; again, resolution clearly helps • Forecast skill?? • How should we weight seasonal forecast skill? • What other tests should we use for a model? Links to extended/monthly forecast range??

  25. Cost-effective systems • Back integrations dominate the total cost of the system (see the arithmetic sketch after this slide) • System 3: 3300 back integrations (must be in the first year) • 492 real-time integrations (per year) • Back integrations define the model climate • Need both the climate mean and the pdf; the latter needs a large sample • May prefer to use a “recent” period (30 years? Or less??) • System 2 had a 75-member “climate”, System 3 has 275 • Sampling is basically OK • Back integrations provide information on skill • A forecast cannot be used unless we know (or assume) its level of skill • Observations have only 1 member, so large ensembles are much less helpful than large numbers of cases • Care is needed, e.g. to estimate the skill of a 41-member ensemble based on the past performance of an 11-member ensemble • For regions of high signal/noise, System 3 gives adequate skill estimates • For regions of low signal/noise (e.g. <= 0.5), hundreds of years are needed
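
The counts on this slide follow from the System 3 configuration on slide 6; a few lines make the arithmetic explicit (the System 2 breakdown is not given, so only its stated total appears):

```python
# Back integrations: 1981-2005, an 11-member ensemble every month
back_integrations = 25 * 12 * 11   # = 3300, dominating total system cost

# Real time: a 41-member forecast every month; 492 = 41 * 12, so the
# quarterly 11-member annual-range runs are evidently counted as
# extensions of these members rather than as separate integrations
real_time_per_year = 41 * 12       # = 492

# Model-climate sample sizes (members x years)
s3_climate = 11 * 25               # = 275 for System 3
s2_climate = 75                    # stated total for System 2
print(back_integrations, real_time_per_year, s3_climate, s2_climate)
```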

  26. Data policy and exchange issues • Present data policy • In Europe, constrains the free distribution/exchange of seasonal forecast data • The policy is not fixed in stone, and may evolve over time • Science • Want to make sure that scientific studies are hindered as little as possible • CHFP is the main research project on seasonal prediction; data policy has been OK, but resources for data exchange were long a sticking point • High-level support for new projects may be helpful • Real-time forecasts • Some data can be used by / supplied to WMO • Need to ensure that it is enough • Need to ensure that important “public good” applications are supported

  27. Conclusions • Seasonal prediction is still exciting and challenging • Mid-latitude skill and reliability still need much work • Higher resolution seems helpful • Testing/assessing/selecting models needs to cut across timescales • Coordinated experimentation has the potential to be valuable, beyond CHFP • Careful design will make it easier for operational centres to participate
