Multi Model Ensembles CTB Transition Project Team Report




  1. Multi Model Ensembles: CTB Transition Project Team Report
  Suranjana Saha, EMC (chair)
  Huug van den Dool, CPC
  Arun Kumar, CPC
  February 2007

  2. TWO STUDIES WERE CONDUCTED USING THE CFS AND EUROPEAN DEMETER DATA TO EVALUATE THE FOLLOWING:
  • How extensive (long) should hindcasts be?
  • Do the NCEP CFS forecasts add to the skill of the European DEMETER-3 forecasts to produce a viable International Multi Model Ensemble (IMME)?

  3. How extensive (long) should hindcasts be?
  Huug van den Dool, Climate Prediction Center, NCEP/NWS/NOAA
  Suranjana Saha, Environmental Modeling Center, NCEP/NWS/NOAA

  4. Explained Variance (%), Feb 1981-2001; lead 3 (Nov starts); monthly T2m (US, CD data).
  Explained Variance = square of the Anomaly Correlation.
  SEC: Systematic Error Correction; EW: Equal Weights.
  Models: CFS = NCEP CFS (USA); EC = ECMWF; PLA = Max Planck Institute (Germany); METF = Meteo-France (France); UKM = UK Met Office; INGV = INGV (Italy); LOD = LODYC (France); CERF = CERFACS (France).
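
For reference, here is a minimal sketch (Python, not from the original report) of how the anomaly correlation and its square, the explained variance quoted on slide 4, can be computed at one grid point. The arrays `fcst_anom` and `obs_anom` are hypothetical hindcast-anomaly series with one value per year; re-centering the series over the hindcast years is one common convention and an assumption here.

    # Minimal sketch: anomaly correlation and explained variance for one
    # grid point / season. `fcst_anom` and `obs_anom` are hypothetical 1-D
    # arrays of forecast and observed anomalies, one value per hindcast year.
    import numpy as np

    def anomaly_correlation(fcst_anom, obs_anom):
        """Centered anomaly correlation between forecast and observed anomalies."""
        f = fcst_anom - fcst_anom.mean()   # re-centering is an assumed convention
        o = obs_anom - obs_anom.mean()
        return np.sum(f * o) / np.sqrt(np.sum(f**2) * np.sum(o**2))

    fcst_anom = np.array([0.4, -0.1, 0.8, -0.6, 0.2])   # hypothetical values
    obs_anom  = np.array([0.5,  0.1, 0.6, -0.9, 0.0])

    ac = anomaly_correlation(fcst_anom, obs_anom)
    explained_variance_pct = 100.0 * ac**2   # slide 4: EV = square of anom. correlation
    print(f"AC = {ac:.2f}, explained variance = {explained_variance_pct:.0f}%")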

  5. Anomaly Correlation (%), Feb 1981-2001; lead 3 (Nov starts); monthly T2m (US, CD data). Panels: with SEC21, with SEC8, and SEC8 minus SEC21.
  More years are needed to determine the SEC where/when the interannual standard deviation is large. SEC: Systematic Error Correction.
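
The SEC discussed on slides 5-7 appears to be a first-moment (mean bias) correction estimated from the available hindcast years. The sketch below is an illustration under that assumption, not the authors' code; SEC8 and SEC21 would differ only in how many years feed the bias estimate. Array names and shapes are hypothetical.

    # Minimal sketch of a first-moment systematic error correction (SEC),
    # assuming SEC means subtracting the model-minus-observation mean bias
    # estimated over the available hindcast years (8 or 21). Hypothetical
    # arrays: hindcasts and observations have shape (n_years, n_lat, n_lon)
    # for one start month and lead.
    import numpy as np

    def apply_sec(forecast, hindcasts, observations):
        """Remove the mean bias (1st moment only) estimated from the hindcast set."""
        bias = hindcasts.mean(axis=0) - observations.mean(axis=0)  # per grid point
        return forecast - bias

    # SEC8 vs SEC21 differ only in how many years feed the bias estimate:
    # apply_sec(fcst, hindcasts[-8:], observations[-8:])   # 8-year SEC
    # apply_sec(fcst, hindcasts, observations)             # 21-year SEC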

  6. CONCLUSIONS
  • Without SEC (systematic error correction) there is no skill by any method (for presumably the best month: Feb).
  • With SEC (1st moment only), only a few models show skill (5 out of 8 are still useless).
  • The MME is not good when the quality of the models varies too much.
  • MME3, which uses just the three good models, works well.

  7. CONCLUSIONS (contd)
  • The CFS improves the most from extensive hindcasts (21 years is noticeably better than 8) and has the most skill. The other models have far less skill even with all years included.
  • Cross-validation (CV) is problematic (leave three years out when the SEC itself is based on only 8 years?); see the leave-3-out sketch below.
  • More years are needed to determine the SEC where/when the interannual standard deviation is large.
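
To illustrate the cross-validation concern above, here is a hypothetical leave-3-years-out sketch of such a bias correction: with an 8-year hindcast, each corrected year is left with only 5 years for the bias estimate. The leave-out convention shown is an assumption, since the slide does not specify one.

    # Hypothetical sketch of leave-3-years-out cross-validation of a
    # first-moment SEC. `hindcasts` and `observations` are assumed arrays of
    # shape (n_years, n_lat, n_lon) for one start month and lead.
    import numpy as np

    def cross_validated_sec(hindcasts, observations):
        n_years = hindcasts.shape[0]
        corrected = np.empty_like(hindcasts)
        for y in range(n_years):
            # Withhold the target year and its two neighbours (one of several
            # possible leave-3-out conventions; the slide does not say which).
            held_out = {(y - 1) % n_years, y, (y + 1) % n_years}
            keep = [k for k in range(n_years) if k not in held_out]
            bias = hindcasts[keep].mean(axis=0) - observations[keep].mean(axis=0)
            corrected[y] = hindcasts[y] - bias
        return corrected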

  8. 15-member CFS reforecasts

  9. Does the NCEP CFS add to the skill of the European DEMETER-3 to produce a viable International Multi Model Ensemble (IMME)?
  Huug van den Dool, Climate Prediction Center, NCEP/NWS/NOAA
  Suranjana Saha and Åke Johansson, Environmental Modeling Center, NCEP/NWS/NOAA

  10. DATA USED
  • DEMETER-3 (DEM3) = ECMWF + METFR + UKMO
  • CFS
  • IMME = DEM3 + CFS
  • 1981-2001
  • 4 initial condition months: Feb, May, Aug and Nov
  • Leads 1-5
  • Monthly means

  11. DATA/Definitions USED (contd)
  • Anomaly Correlation (deterministic) and Brier Score (probabilistic)
  • Ensemble Mean and PDF
  • T2m and Prate
  • Europe and United States
  Verification data:
  • T2m: Fan and van den Dool
  • Prate: CMAP
  "NO consolidation, equal weights, NO cross-validation" (equal-weight pooling sketched below)
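
A hypothetical sketch of the equal-weight combination just described: each model's members are first averaged to a model ensemble mean, and the multi-model mean then weights every model equally, with no consolidation weights and no cross-validation. The dictionary layout and model keys are illustrative only, and whether the pooled PDF weights members or models equally is not spelled out on the slide.

    # Hypothetical sketch of an equal-weight multi-model ensemble mean.
    # `model_anoms` maps a model name to an array of member anomalies with
    # shape (n_members, n_lat, n_lon); names and shapes are illustrative.
    import numpy as np

    def equal_weight_mme(model_anoms):
        model_means = [members.mean(axis=0) for members in model_anoms.values()]
        return np.mean(model_means, axis=0)   # every model gets the same weight

    # Slide 10: DEM3 = ECMWF + METFR + UKMO; IMME = DEM3 + CFS
    # dem3 = equal_weight_mme({k: anoms[k] for k in ("ECMWF", "METFR", "UKMO")})
    # imme = equal_weight_mme({k: anoms[k] for k in ("ECMWF", "METFR", "UKMO", "CFS")})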

  12. BRIER SCORE FOR 3-CLASS SYSTEM
  1. Calculate tercile boundaries from observations 1981-2001 (1982-2002 for longer leads) at each gridpoint.
  2. Assign departures from the model's own climatology (based on 21 years, all members) to one of three classes: Below (B), Normal (N) and Above (A), and find the fraction of forecasts (F) among all participating ensemble members for these classes, denoted FB, FN and FA respectively, such that FB + FN + FA = 1.
  3. Denoting observations as O, calculate the Brier Score (BS) as BS = {(FB - OB)^2 + (FN - ON)^2 + (FA - OA)^2} / 3, aggregated over all years and all grid points. (For example, when the observation is in the B class, (OB, ON, OA) = (1, 0, 0), etc.)
  4. BS for a random deterministic prediction: 0.444. BS for 'always climatology' (1/3, 1/3, 1/3): 0.222.
  5. RPS: the same as the Brier Score, but for the cumulative distribution (no-skill = 0.148).
  (A code sketch of these steps follows below.)
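
The recipe above translates directly into code. The sketch below is one plausible reading of steps 1-5, not the authors' implementation: it scores a single grid point and year with hypothetical arrays, whereas the study aggregates the scores over all years and grid points.

    # Sketch of the 3-class Brier score and RPS recipe above (not the authors'
    # code). `member_anoms` is a hypothetical (n_members,) array of forecast
    # departures from the model's own climatology, `obs_anom` the observed
    # departure, and `lo, hi` the tercile boundaries from the 1981-2001
    # observations at this grid point.
    import numpy as np

    def three_class_fractions(values, lo, hi):
        """Fractions in Below / Normal / Above; they sum to 1."""
        values = np.asarray(values, dtype=float)
        fb = np.mean(values < lo)
        fa = np.mean(values > hi)
        return np.array([fb, 1.0 - fb - fa, fa])

    def brier_score_3class(member_anoms, obs_anom, lo, hi):
        f = three_class_fractions(member_anoms, lo, hi)    # (FB, FN, FA)
        o = three_class_fractions([obs_anom], lo, hi)      # one-hot (OB, ON, OA)
        return np.mean((f - o) ** 2)                       # the /3 via the mean

    def rps_3class(member_anoms, obs_anom, lo, hi):
        """Same as the Brier score but on cumulative distributions (step 5)."""
        f = np.cumsum(three_class_fractions(member_anoms, lo, hi))
        o = np.cumsum(three_class_fractions([obs_anom], lo, hi))
        return np.mean((f - o) ** 2)

    # Reference values from the slide: random deterministic forecast -> BS = 0.444,
    # always-climatology (1/3, 1/3, 1/3) -> BS = 0.222, no-skill RPS = 0.148.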

  13. Number of times IMME improves upon DEM3, out of 20 cases (4 ICs x 5 leads): "The bottom line"

  14. Frequency of being the best model in 20 cases, in terms of the Anomaly Correlation of the Ensemble Mean: "Another bottom line"

  15. Frequency of being the best model in 20 cases, in terms of the Brier Score of the PDF: "Another bottom line"

  16. Frequency of being the best model in 20 cases, in terms of the Ranked Probability Score (RPS) of the PDF: "Another bottom line"

  17. CONCLUSIONS
  • Overall, the NCEP CFS contributes to the skill of the IMME (relative to DEM3) for equal weights.
  • This is especially so in terms of the probabilistic Brier Score, and for precipitation.
  • When the skill of a model is low, consolidation of forecasts (based on a priori skill estimates) will reduce the chance that this model is included in the IMME, and thus may improve the skill of the IMME relative to equal weighting.

  18. CONCLUSIONS (Contd)
  In comparison to ECMWF, METFR and UKMO, the CFS as an individual model does:
  • well in deterministic scoring (AC) for Prate
  • very well in probability scoring (BS) for Prate and T2m over both the USA and EUROPE domains.

  19. CONCLUSIONS (Contd)
  • The weakness of the CFS is in the deterministic scoring (AC) for T2m (where it is near the average of the other models), over both EUROPE and the USA.
  • While the CFS contributes to the IMME, it is questionable whether all the other models contribute to the CFS.

  20. CONCLUSIONS (Contd)
  • Skill (if any) over EUROPE or the USA is very modest for any model, or any combination of models. The AC for the ensemble mean gives a more "positive" impression of skill than the Brier Score, which rarely improved over climatological probabilities in this study.

  21. EUROPEAN IMME
  UPDATE
  • RESULTS OF THIS STUDY WERE SENT TO THE ECMWF.
  • THE DIRECTOR OF ECMWF SHOWED INTEREST, BUT WANTED HIS OWN SCIENTISTS TO CARRY OUT THE EVALUATION.
  • DR. DOBLAS-REYES (ECMWF) HAS DOWNLOADED THE CFS RETROSPECTIVE DATA FROM THE CFS SERVER AND IS IN THE PROCESS OF EVALUATING THE IMME, BUT USING THEIR LATEST EUROSIP DATA (INSTEAD OF THE DEMETER DATA).
  RISKS
  • THE EUROPEANS MAY WELL WANT TO KEEP THEIR MME EUROPEAN.
  • THEIR OPERATIONAL SEASONAL FORECAST PRODUCTS ARE NOT RELEASED IN REAL TIME (ONLY TO MEMBER STATES).
  • BILATERAL AGREEMENTS MAY HAVE TO BE MADE TO OBTAIN THESE IN REAL TIME FOR ANY OPERATIONAL USE IN AN IMME WITH THE CFS.

  22. OTHER COUNTRIES IN IMME
  UPDATE
  BMRC, AUSTRALIA
  • THE AUSTRALIANS ARE IN THE PROCESS OF COMPLETING THE RETROSPECTIVE FORECASTS WITH THEIR COUPLED MODEL.
  • WHEN THESE FORECASTS ARE COMPLETED, A SIMILAR STUDY WILL BE CONDUCTED TO EVALUATE WHETHER THE AUSTRALIAN MODEL FORECASTS WILL BRING ADDITIONAL SKILL TO THE CFS FORECASTS.
  BCC, BEIJING, CHINA
  • A SIMILAR SITUATION PERTAINS TO THE CHINESE METEOROLOGICAL AGENCY. WHEN THEY HAVE COMPLETED THE RETROSPECTIVE FORECASTS WITH THEIR COUPLED MODEL, WE WILL EVALUATE WHETHER THE CHINESE MODEL FORECASTS WILL BRING ADDITIONAL SKILL TO THE CFS FORECASTS.

  23. NATIONAL MME
  UPDATE
  GFDL
  • HINDCAST DATA HAS BEEN OBTAINED FOR 4 INITIAL MONTHS (APR, MAY, OCT, NOV) FROM GFDL. THIS DATA IS BEING PROCESSED AND TRANSFERRED TO NCEP GRIDS FOR COMPARISON AND INCLUSION IN AN MME WITH THE CFS.
  NASA
  • NOT READY TO START THEIR HINDCASTS.
  NCAR
  • NOT READY TO START THEIR HINDCASTS.
  • BEN KIRTMAN (COLA) HAS DONE A FEW HINDCASTS WITH THE NCAR MODEL WHICH SHOW PROMISE. A FULL HINDCAST NEEDS TO BE DONE FOR EVALUATION IN AN MME WITH THE CFS.
