
Review of the May 2006 Basic Product Implementation


Presentation Transcript


  1. Review of the May 2006 Basic Product Implementation. Bo Cui (1), Yuejian Zhu (2), Zoltan Toth (2), Richard Verret (3), Stéphane Beauregard (3) and Richard Wobus (1). (1) SAIC at Environmental Modeling Center, NCEP/NWS; (2) Environmental Modeling Center, NCEP/NWS; (3) Canadian Meteorological Centre, Meteorological Service of Canada. Acknowledgements: Lewis Poulin (CMC/MSC); Dingchen Hou (EMC/NCEP/NWS/NOAA); David Unger (CPC/NCEP/NWS/NOAA); David Michaud, Brent Gorden, Luke Lin (NCO/NCEP/NWS/NOAA)

  2. First Operational Implementation of NAEFS • Bias-corrected members of the joint MSC-NCEP ensemble • Decaying accumulated bias (~past 50 days) for each variable at each grid point • For 35 of the 50 NAEFS variables • 32 (00Z), 15 (06Z), 32 (12Z) and 15 (18Z) joint ensemble members • Bias correction against each center's own operational analysis • Weights for each member for creating the joint ensemble (equal weights now; unequal weights to be added later) • Weights do not depend on the variables • Weights depend on geographical location (low-precision packing) • Weights depend on the lead time • Climate anomaly percentiles for each member • Based on the NCEP/NCAR 40-year reanalysis • Used the first 4 Fourier modes for the daily mean (see the smoothing sketch after this slide) • Estimated the climate pdf (standard deviation) from the daily mean • For 19 of the 50 NAEFS variables • 32 (00Z), 15 (06Z), 32 (12Z) and 15 (18Z) joint ensemble members • Adjustment made to account for the difference between the operational analysis and the reanalysis • Provides a basis for downscaling if local climatology is available • Non-dimensional units
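A minimal Python sketch of the climatology smoothing mentioned above (keeping the annual mean plus the first four Fourier harmonics of the daily mean). The function name, the 365-day array layout and the synthetic data are illustrative assumptions, not the operational NAEFS code:

```python
import numpy as np

def smooth_daily_climatology(daily_mean, n_harmonics=4):
    """Keep the annual mean plus the first n_harmonics Fourier modes."""
    daily_mean = np.asarray(daily_mean, dtype=float)   # shape (365,), one grid point
    coeffs = np.fft.rfft(daily_mean)                   # spectral coefficients
    coeffs[n_harmonics + 1:] = 0.0                     # zero out higher-frequency modes
    return np.fft.irfft(coeffs, n=daily_mean.size)     # smoothed daily climatology

# Toy example: noisy synthetic 2 m temperature climatology for one grid point
days = np.arange(365)
raw = 285.0 + 10.0 * np.cos(2 * np.pi * (days - 200) / 365.0) + np.random.randn(365)
smooth = smooth_daily_climatology(raw)
```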

  3. RAW & BASIC PRODUCT AVAILABILITY 2005, 2006, 2007, 2008

  4. List of Variables for Bias Correction, Weights and Forecast Anomalies for the CMC & NCEP Ensembles

  5. Bias Correction Method & Application • Bias assessment: adaptive (Kalman-filter-type) decaying-averaging algorithm: mean error = (1 - w) * prior t.m.e. + w * (f - a), where t.m.e. = time mean error, f = forecast and a = analysis, computed for separate cycles, each lead time and each individual grid point (see the code sketch after this slide) • Tested different decaying weights w: 0.25%, 0.5%, 1%, 2%, 5% and 10% • Decided to use the 2% (~50-day) decaying accumulated bias estimate (Toth, Z., and Y. Zhu, 2001) • Bias correction: applied to the 15-member NCEP operational ensemble • [figure: bias estimates for the different decaying weights]
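The decaying-averaging update on this slide reduces to one line of code. Below is a hedged Python sketch with w = 0.02 (roughly a 50-day memory); variable names are placeholders, and operationally the update runs separately per cycle, lead time and grid point:

```python
def update_bias(prior_bias, forecast, analysis, w=0.02):
    """Decaying-average (Kalman-filter-type) update of the time mean error."""
    return (1.0 - w) * prior_bias + w * (forecast - analysis)

def bias_correct(forecast, bias):
    """Remove the accumulated bias estimate from the raw forecast."""
    return forecast - bias

# One grid point, one lead time: accumulate the bias, then correct a new forecast.
bias = 0.0
for f, a in [(288.4, 287.9), (289.1, 288.2), (287.8, 287.5)]:  # toy (forecast, analysis) pairs
    bias = update_bias(bias, f, a)
corrected = bias_correct(290.0, bias)
```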

  6. Comparison of Different Decaying Weights

  7. NAEFS Implementation Testing • Period: 04/10/2006 – current (NCO real-time parallel) • Map comparisons of bias (before and after correction) • 500 hPa height, 2 m temperature • Statistics for: • Bias reduction in percentage • Height, temperature, winds • RMS errors • Probabilistic verification (ROC, RPSS; see the RPSS sketch after this slide) • NH, SH and Tropics • Conclusions • Bias reduced (approximately 50% at early lead times) • RMS errors improved by 9% for days 0-3 • Probabilistic forecasts improved for all areas and all lead times; typically a 20-24 hour improvement at day 7 for the NH
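For reference, a small Python sketch of the RPSS measure used in this verification (RPSS = 1 - RPS_forecast / RPS_climatology, with RPS summing squared differences of cumulative category probabilities). The category definitions and toy inputs are illustrative, not the exact operational setup:

```python
import numpy as np

def rps(prob, obs_cat):
    """Ranked probability score for one case: prob has K category probabilities,
    obs_cat is the index of the observed category."""
    cum_fcst = np.cumsum(prob)
    cum_obs = np.cumsum(np.eye(len(prob))[obs_cat])
    return np.sum((cum_fcst - cum_obs) ** 2)

def rpss(fcst_probs, clim_probs, obs_cats):
    """Ranked probability skill score relative to climatology, averaged over cases."""
    rps_fcst = np.mean([rps(p, o) for p, o in zip(fcst_probs, obs_cats)])
    rps_clim = np.mean([rps(clim_probs, o) for o in obs_cats])
    return 1.0 - rps_fcst / rps_clim

# Toy example with 10 climatologically equally likely bins and one case
clim = np.full(10, 0.1)
fcsts = [np.array([0.0, 0.1, 0.3, 0.4, 0.2, 0, 0, 0, 0, 0])]
print(rpss(fcsts, clim, obs_cats=[3]))
```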

  8. 500 hPa height: 120-hour forecast (initial time 2006043000). Shaded: left – raw bias; right – bias after correction

  9. 2 m temperature: 120-hour forecast (initial time 2006043000). Shaded: left – raw bias; right – bias after correction

  10. Bias Improvement (absolute value) after Bias Correction • Overall bias reduction (globally): D0-3: 50+%, D3-8: 40%, D8-15: 30% • There is daily variation after bias correction; more bias is removed for forecasts valid at the 12Z cycle • [panels: 500 hPa height, 850 hPa temperature, sea level pressure, 2 m temperature]

  11. Bias Improvement (absolute value) after Bias Correction • Overall bias reduction for the Northern Hemisphere • [panels: bias before/after bias correction]

  12. Bias Improvement (absolute value) after Bias Correction • Overall bias reduction (Tropics): D0-3: 50%, D3-8: 45%, D8-15: 40% • [panels: 10 m U-component, 10 m V-component, sea level pressure, 2 m temperature]

  13. RPSS before/after bias correction • NAEFS improvement: close to a 2-day extension of skill with the first NAEFS implementation • RPSS improvement over the past 4 years: a 1.5-day extension of skill

  14. [figure: 1.5-day extension of skill over 4 years; close to a 2-day extension of skill with the first NAEFS implementation]

  15. NAEFS Performance Review – Key Performance Measures (Appendix 6), 25 Apr – 10 May 2006 • Improvement in ensemble forecasts (ensemble mean, 3-14 day lead time) – requirement / threshold / actual / variance: • Bias reduction (%): threshold 50%; actual 30-70%; met or exceeded in the Tropics and up to D3 elsewhere, slightly below otherwise • RMS error reduction (%): threshold 10%; actual up to 10%; met up to D3, below expectation from D4 onward • Improvement in ensemble-based probabilistic forecasts – lead time / threshold / actual / variance: • 3 days: 6 hours threshold; 12 hours actual; exceeded • 7 days: 12 hours threshold; 16 hours actual; exceeded • 10-14 days: 24 hours threshold; 48 hours actual; exceeded

  16. COMPUTATION OF CLIMATE ANOMALIES • Apply the procedure to each ensemble member • 19 selected bias-corrected variables • Adjust the bias-corrected forecast to look like the reanalysis • Use the standard Kalman-filter-type bias-correction algorithm • Evaluate the systematic difference between the CDAS analysis (2.5x2.5 grid) and the operational analyses (1x1 grid) • Remove this systematic difference from the bias-corrected forecast (2.5x2.5) • Compare the adjusted bias-corrected forecast to the reanalysis climate pdf • Represent the climate distribution by a parametric pdf (2-3 parameters) • Determine the climate percentile corresponding to the forecast value (see the sketch after this slide)
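A hedged Python sketch of the last two steps above: compare the adjusted, bias-corrected forecast to a parametric climate pdf and return the climate percentile. The Gaussian form and the argument names are assumptions for illustration; the operational pdf may differ:

```python
from scipy.stats import norm

def climate_percentile(forecast, clim_mean, clim_std):
    """Climate percentile (0-100) of a forecast value under a Gaussian climate pdf."""
    return 100.0 * norm.cdf(forecast, loc=clim_mean, scale=clim_std)

# Toy example: a 295 K 2 m temperature forecast against a 290 K / 5 K climate pdf
pct = climate_percentile(295.0, clim_mean=290.0, clim_std=5.0)  # ~84th percentile
```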

  17. COMPUTATION OF CLIMATE ANOMALIES • [schematic: ensemble pdf vs. climate pdf along a temperature axis (260-305 K); individual ensemble members are mapped onto climate percentiles (1, 10, 20, 50, 80, 90, 99)]

  18. [figure: daily climatological mean and daily climatological standard deviation; black – near Washington DC, green – near Ottawa]

  19. Example for one ensemble member, no analysis bias correction

  20. ENSEMBLE 10-, 50- (MEDIAN) & 90-PERCENTILE FORECAST VALUES (BLACK CONTOURS) AND CORRESPONDING CLIMATE PERCENTILES (SHADES OF COLOR) • Example of a percentile forecast expressed in terms of climate percentiles • Proposed future NDGD products

  21. Notes on the Bias Correction Implementation • Tmax and Tmin bias correction • There are no Tmax and Tmin fields in the analysis data. The bias estimates of the 2 m temperature at the beginning and the end of each 6-h period are averaged, and this averaged bias is applied to the Tmin and Tmax valid for that 6-h period (see the sketch after this slide). • Current timings and availability of CMC data • Forecasts: • 00Z forecasts: 7:30 GMT (3:30 EST) • 12Z forecasts: 19:30 GMT (15:30 EST) • Analyses: • 00Z analysis: 12:10 GMT (8:10 EST) • 06Z analysis: 14:40 GMT (10:40 EST) • 12Z analysis: 23:25 GMT (19:25 EST) • 18Z analysis: 02:45 GMT (22:45 EST) • Problem and current solution • Because of the long lag in the analysis data, forecast and analysis pgrba data for the same cycle cannot be generated simultaneously • The analysis from one day earlier is used to update the bias estimate • Disk usage • NCEP parallel: 5.4 GB/day pgrba_an, 13 GB/day pgrba_bc, 0.062 GB/day pgrba_wt • MSC parallel: 4.9 GB/day pgrba_an, 12 GB/day pgrba_bc, 0.070 GB/day pgrba_wt • Total of ~36 GB/day
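A minimal sketch of the Tmax/Tmin workaround described above, assuming per-period 2 m temperature bias estimates are already available (the function and argument names are illustrative):

```python
def tmax_tmin_bias(t2m_bias_start, t2m_bias_end):
    """Average the 2 m temperature bias at the start and end of a 6-h period."""
    return 0.5 * (t2m_bias_start + t2m_bias_end)

def correct_extreme_temp(raw_value, t2m_bias_start, t2m_bias_end):
    """Apply the averaged 2 m temperature bias to a Tmax or Tmin forecast
    valid over that 6-h period."""
    return raw_value - tmax_tmin_bias(t2m_bias_start, t2m_bias_end)
```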

  22. Background

  23. Evaluation after bias correction (16 cases) • Probabilistic skill extended by ~20 h at day 7 • RMS errors for the ensemble mean reduced by ~9% for the 48-h forecast • [panels: Northern Hemisphere, Southern Hemisphere, Tropics; black – operational ensemble (10 members), red – real-time parallel ensemble (14 members), green – real-time parallel ensemble after bias correction (14 members)]

  24. RPSS before/after bias correction RPSS performance for past 5 years

  25. Raw, Optimal & Actual Bias-Corrected Ensembles – Annual Mean RPSS (20040301 – 20050228), 500 mb Height over the Northern Hemisphere • Decaying-average bias correction improves RPSS at all lead times vs. the raw operational ensemble • The reforecast with the climate mean error removed gains significant improvement at all lead times vs. the raw reforecast • Operational vs. reforecast ensembles: the operational forecast is better than the bias-corrected reforecast out to 9-10 days; beyond 10 days the bias-corrected reforecast becomes competitive with, or better than, the operational forecast • The improvement is larger for the CDC reforecast • [figure legend: 3 operational ensembles (OPR ENS.), 3 reforecast ensembles (RFC ENS.)]

  26. RMS errors before/after bias correction; operational ROC scores for the past 5 years

  27. Raw, Optimal & Actual Bias-Corrected Ensembles – RPSS of 850 mb Temperature, Northern Hemisphere, 2004 Summer • The decaying-average bias-corrected operational ensemble performs better than the raw forecast • The reforecast with the climate mean error removed gains significant improvement at most lead times vs. the raw reforecast • Operational vs. reforecast ensembles: both the raw and the post-processed operational forecasts are better than the bias-corrected reforecast • [figure legend: 3 operational ensembles (OPR ENS.), 3 reforecast ensembles (RFC ENS.)]

  28. Raw, Optimal & Actual Bias-Corrected Ensembles – RPSS of 2 m Temperature, Northern Hemisphere, Annual Mean Average for 20040301 – 20050228 • The abrupt drop in RPSS of the raw operational ensemble around day 7 is caused by a model configuration change at 180 h (resolution changes from 1° to 2.5°) • The decaying average gives a fairly good bias correction compared against the verifying analysis

  29. Preliminary Results • 1. Decaying averaging (2% weight, ~46-day operational training data): • Short range (~day 5): works very well, all measures improved • Week 2: limited success; improves probabilistic performance (e.g., RPSS, outlier statistics) • 2. Climatological mean error removed (25-yr CDC training data): • RMS and PAC: very limited improvement • Probabilistic measures (RPSS, etc.): significant gain • 3. Bias correction algorithm: • Using the most recent data is better out to ~5 days • Using a large sample works for week 2 • 4. Operational vs. reforecast performance: • Ensemble mean: operational much better than the CDC hindcast (CDC has ~50% larger initial error) • Probabilistic scores: operational much better out to day 10; for some measures, CDC hindcasts are better beyond day 10 • 5. "Hybrid" system (large reforecast archive & most recent operational data): no major improvement

  30. Tentative Conclusions 1. Adaptive, regime-dependent bias correction works well for the first few days (almost as good as "optimal") • Frequent updates of the analysis/modeling system remain possible 2. Climate mean bias correction can add value, especially for week-2 probabilistic forecasts • Short range: no need for a large hindcast data set • Generating a large hindcast ensemble is expensive, but it can be helpful for extended-range forecasts • Take upgrades of the analysis/modeling system into account when designing the reforecast experiment 3. Hybrid system tested • May not be helpful if the two systems are dissimilar?
