
Local Bayesian Model Averaging for the UW ProbCast

Eric P. Grimit, Jeffrey Baars, Clifford F. Mass (University of Washington, Atmospheric Sciences); Patrick Tewson (University of Washington, Applied Physics Laboratory). Research supported by: Office of Naval Research Multi-Disciplinary University Research Initiative (MURI).


Presentation Transcript


  1. Local Bayesian Model Averaging for the UW ProbCast Eric P. Grimit, Jeffrey Baars, Clifford F. Mass (University of Washington, Atmospheric Sciences) Patrick Tewson (University of Washington, Applied Physics Laboratory) Research supported by: Office of Naval Research Multi-Disciplinary University Research Initiative (MURI)

  2. Spring MURI Meeting; Seattle, WA Motivation “As high as 81! Hey, Eric! Those intervals are too wide! And 15% chance of precip? Hmm…”

  3. Spring MURI Meeting; Seattle, WA Summary from Last Fall • Mean error climatology (MEC): • Ensemble-mean + its error variance over some history. • Good benchmark to evaluate competing calibration methods. • Generally beats the raw ensemble, even though it is not a state-dependent forecast of uncertainty. • Local Bayesian model averaging (Local-BMA): • Model forecast performance varies locally: • BMA parameters should depend on grid point location. • Train BMA using elevation, land-use, and proximity constraints. • Can consistently beat MEC in tests with grid-based verification.
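To make the MEC benchmark above concrete, here is a minimal sketch (Python with NumPy/SciPy) of fitting a Gaussian to the current ensemble mean, with the spread taken from the ensemble mean's historical errors. The function and variable names are illustrative assumptions, not the UW implementation.

```python
import numpy as np
from scipy.stats import norm

def mec_forecast(todays_members, past_ens_means, past_obs):
    """Mean error climatology (MEC) benchmark: a Gaussian centered on today's
    ensemble mean, with the spread taken from the historical errors of the
    ensemble mean over a training period.  Illustrative sketch only."""
    mu = float(np.mean(todays_members))            # today's ensemble mean
    errors = np.asarray(past_obs) - np.asarray(past_ens_means)
    sigma = float(np.std(errors, ddof=1))          # climatological error spread
    return norm(loc=mu, scale=sigma)               # frozen predictive distribution

# Example usage: a 90% central prediction interval
# dist = mec_forecast(members_today, train_ens_means, train_obs)
# lower, upper = dist.ppf([0.05, 0.95])
```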

  4. Spring MURI Meeting; Seattle, WA Global-BMA Calibration and Sharpness • Calibration: probability integral transform (PIT) histograms, an analog of verification rank histograms for continuous forecasts [panels: FIT, MEC, Global-BMA]. • Sharpness [panels: Global-BMA, MEC]. [00 UTC Cycle; October 2002 – March 2004; 361 cases]
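Since the PIT histogram is central to the calibration assessment on these slides, a minimal sketch of computing one follows; it assumes one predictive CDF callable and one verifying observation per forecast case (names are hypothetical).

```python
import numpy as np

def pit_values(cdf_funcs, observations):
    """Probability integral transform: evaluate each case's predictive CDF at
    its verifying observation.  A well-calibrated forecast gives PIT values
    that are uniform on [0, 1], i.e. a flat PIT histogram."""
    return np.array([F(y) for F, y in zip(cdf_funcs, observations)])

def pit_histogram(pits, n_bins=10):
    """Bin the PIT values, analogous to a verification rank histogram."""
    counts, edges = np.histogram(pits, bins=n_bins, range=(0.0, 1.0))
    return counts / counts.sum(), edges
```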

  5. Spring MURI Meeting; Seattle, WA Local-BMA Calibration and Sharpness • Calibration: probability integral transform (PIT) histograms, an analog of verification rank histograms for continuous forecasts [panels: FIT, BMA, MEC]. • Sharpness [panels: Local-BMA, MEC]. [00 UTC Cycle; October 2002 – March 2004; 361 cases]

  6. Spring MURI Meeting; Seattle, WA BMA Forecast Skill Comparison [panels: Local-BMA CRPS % improvement over MEC; Global-BMA CRPS % improvement over MEC]

  7. An Observation-Based Approach to Local-BMA • Development and testing: Winter-Spring 2006. • Expect it to drive the MURI “killer application”: the UW ProbCast. • Several “tuning” parameters are available, which can hopefully be optimized. • Deploy it initially for MAXT2 and MINT2 forecasts. • Application to mixed discrete-continuous quantities (e.g., QPF) and 2-D quantities (wind) will require further exploration.

  8. Spring MURI Meeting; Seattle, WA An Observation-Based Approach to Local-BMA • Allow BMA parameters to vary by grid point. • Use observations, remote if necessary, as training data. • Follow the Baars et al. procedure for bias correction (optimized from the Mass-Wedam-Steed method) to also select the training data for Local-BMA. • For each grid point, search for n (e.g., 8) nearby stations (e.g., within 864 km) at a similar elevation (e.g., within 250 m) and with similar land use. • Land-use categories were consolidated into 9 categories (down from 24 in MM5). Figure shows the methodology for the Mass-Wedam-Steed settings.
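A rough sketch of this kind of neighbor search, using the example thresholds from the slide (8 stations, 864 km, 250 m, matching consolidated land-use category). The data structures and the haversine distance are assumptions, not the operational code.

```python
import numpy as np

def select_training_stations(gp_lat, gp_lon, gp_elev, gp_landuse, stations,
                             n_max=8, max_dist_km=864.0, max_delev_m=250.0):
    """For one grid point, pick up to n_max observing stations that are within
    max_dist_km, within max_delev_m of the grid-point elevation, and share the
    (consolidated) land-use category.  Sketch only; 'stations' is assumed to be
    a list of dicts with 'lat', 'lon', 'elev', 'landuse' keys."""
    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

    candidates = []
    for stn in stations:
        if stn["landuse"] != gp_landuse:
            continue
        if abs(stn["elev"] - gp_elev) > max_delev_m:
            continue
        d = haversine_km(gp_lat, gp_lon, stn["lat"], stn["lon"])
        if d <= max_dist_km:
            candidates.append((d, stn))
    candidates.sort(key=lambda pair: pair[0])   # nearest stations first
    return [stn for _, stn in candidates[:n_max]]
```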

  9. Spring MURI Meeting; Seattle, WA Maximum 2-m Temperature – Case Study • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Member forecasts: #1: 64.24 F, #2: 67.12 F, #3: 62.01 F, #4: 60.36 F, #5: 62.30 F, #6: 61.59 F, #7: 64.80 F, #8: 66.88 F • ENS-MEAN: 63.67 F

  10. Spring MURI Meeting; Seattle, WA Global-BMA Mean • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Member forecasts: #1: 67.56 F, #2: 69.57 F, #3: 65.71 F, #4: 64.62 F, #5: 67.23 F, #6: 66.28 F, #7: 68.80 F, #8: 69.23 F • Global-BMA-MEAN: 65.11 F

  11. Spring MURI Meeting; Seattle, WA Local-BMA Mean • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Member forecasts: #1: 67.56 F, #2: 69.57 F, #3: 65.71 F, #4: 64.62 F, #5: 67.23 F, #6: 66.28 F, #7: 68.80 F, #8: 69.23 F • Local-BMA-MEAN: 68.84 F

  12. Spring MURI Meeting; Seattle, WA Bias-Corrected Ensemble Mean • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Member forecasts: #1: 67.56 F, #2: 69.57 F, #3: 65.71 F, #4: 64.62 F, #5: 67.23 F, #6: 66.28 F, #7: 68.80 F, #8: 69.23 F • BC-ENS-MEAN: 67.38 F
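For reference, a minimal sketch of a member-specific mean-bias correction followed by an ensemble mean, as in the BC-ENS-MEAN above. It uses a simple training-window mean error per member, which is a simplification of the Baars et al. procedure; all names are illustrative.

```python
import numpy as np

def bias_corrected_ens_mean(todays_forecasts, past_forecasts, past_obs):
    """Member-specific mean-bias correction followed by an ensemble mean.
    past_forecasts has shape (n_train_days, n_members); past_obs has shape
    (n_train_days,).  Each member's mean error over the training window is
    subtracted from its current forecast.  Simplified illustrative sketch."""
    errors = np.asarray(past_forecasts) - np.asarray(past_obs)[:, None]
    member_bias = errors.mean(axis=0)                    # one bias per member
    corrected = np.asarray(todays_forecasts, dtype=float) - member_bias
    return corrected, float(corrected.mean())
```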

  13. Spring MURI Meeting; Seattle, WA Global-BMA Sharpness • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Global-BMA 95%: 72.55 F; Global-BMA mean: 65.11 F; Global-BMA 5%: 57.68 F

  14. Spring MURI Meeting; Seattle, WA Local-BMA Sharpness • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Local-BMA 95%: 73.64 F; Local-BMA mean: 68.84 F; Local-BMA 5%: 63.36 F

  15. Spring MURI Meeting; Seattle, WA Local-MEC Sharpness • Station: KHMS (Hanford, WA); latitude, longitude: 46.56, -119.60; south-north grid point: 52.289021; west-east grid point: 73.611740 • Obs: 71.00 F • Local-MEC 95%: 72.15 F; Local-MEC mean: 67.38 F; Local-MEC 5%: 62.61 F
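Prediction-interval bounds like the 5% and 95% values on these slides come from quantiles of the predictive distribution. A small sketch of computing such a quantile for a Gaussian BMA mixture by root-finding on the mixture CDF follows; equal-variance Gaussian kernels and the input names are assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bma_mixture_quantile(q, weights, member_means, sigma):
    """Quantile of a BMA predictive distribution built as a weighted mixture of
    Gaussians with a common standard deviation, solved by root-finding on the
    mixture CDF.  Illustrative sketch, not the UW code."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = np.asarray(member_means, dtype=float)

    def mixture_cdf(y):
        return float(np.sum(w * norm.cdf(y, loc=mu, scale=sigma)))

    lo = mu.min() - 10.0 * sigma       # brackets wide enough for any 0 < q < 1
    hi = mu.max() + 10.0 * sigma
    return brentq(lambda y: mixture_cdf(y) - q, lo, hi)

# Example: a 5-95% interval from (made-up) member means and BMA weights
# lower = bma_mixture_quantile(0.05, w, means, sigma)
# upper = bma_mixture_quantile(0.95, w, means, sigma)
```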

  16. Spring MURI Meeting; Seattle, WA Calibration (all stations)

  17. Spring MURI Meeting; Seattle, WA Calibration (water only)

  18. Spring MURI Meeting; Seattle, WA Sharpness (all stations)

  19. Spring MURI Meeting; Seattle, WA Sharpness (water only)

  20. Spring MURI Meeting; Seattle, WA Minimum 2-m Temperature – Same Story [panels: calibration and sharpness; all stations and water only]

  21. Spring MURI Meeting; Seattle, WA Continuous Ranked Probability Scores [panels: MAXT2 and MINT2; all stations and water only]

  22. Spring MURI Meeting; Seattle, WA Next Steps • Go operational with Local-BMA for MAXT2 and MINT2. • Code is almost ready; some issues remain with “blank” grid points. • Parameter optimization? • Work on precipitation next (PoP & PQPF). • Issues with small training samples of precipitation: what if all values are zero? • Probably need to modify the search parameters: distance to crest? up-slope / down-slope? (depends on the terrain gradient and wind). • Wind (2-D vector). • Established methods exist for wind speed and direction separately, using gamma and von Mises mixture distributions, respectively. • Need to build an EM-like algorithm or employ CRPS (energy score) minimization for 2-D wind forecasts. • Work is being done on the CRPS (energy score) for 2-D variables. [statistics]
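As a starting point for the 2-D wind item above, a simple Monte Carlo estimator of the energy score (the multivariate generalization of the CRPS) from forecast samples is sketched below. This is a textbook-style plug-in estimator, not the planned UW implementation.

```python
import numpy as np

def energy_score(samples, obs):
    """Monte Carlo estimate of the energy score from forecast samples
    (shape n_samples x d, n_samples >= 2) and one observed vector (shape d,):
        ES = E||X - y|| - 0.5 * E||X - X'||.
    Smaller is better; for d = 1 it reduces to an estimate of the CRPS."""
    X = np.asarray(samples, dtype=float)
    y = np.asarray(obs, dtype=float)
    n = X.shape[0]
    term1 = np.mean(np.linalg.norm(X - y, axis=1))
    pair_norms = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    term2 = 0.5 * pair_norms.sum() / (n * (n - 1))   # excludes the zero diagonal
    return term1 - term2
```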

  23. QUESTIONS and DISCUSSION

  26. Spring MURI Meeting; Seattle, WA MEC Performance with Grid-Based Verification • Comparison of *UWME 48-h 2-m temperature forecasts; member-specific mean bias correction applied to both [14-day running mean]. • FIT = Gaussian fit to the raw forecast ensemble. • MEC = Gaussian fit to the ensemble mean + the mean error climatology. • CRPS = continuous ranked probability score [probabilistic analog of the mean absolute error (MAE) for scoring deterministic forecasts]. [00 UTC Cycle; October 2002 – March 2004; 361 cases]
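Because both FIT and MEC are Gaussian predictive distributions, their CRPS can be evaluated in closed form; a sketch of that standard formula follows (an illustrative helper, not the verification code used here).

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS for a Gaussian predictive distribution N(mu, sigma^2)
    and verifying observation y:
        CRPS = sigma * [ z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ],
    with z = (y - mu) / sigma.  Like the MAE, smaller is better, and it
    reduces to |y - mu| as sigma -> 0."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
```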

  27. Spring MURI Meeting; Seattle, WA Local-BMA Forecast Performance • After several attempts to implement BMA with local or regional training data, excellent results were achieved when the training data are selected from a neighborhood* of grid points with similar land-use type and elevation. • Example application to 48-h 2-m temperature forecasts uses only 14 training days. • Dramatic improvements in CRPS nearly everywhere. *Neighbors have the same land-use type and an elevation difference < 200 m within a search radius of 3 grid points (60 km). [panels: MEC, BMA]

  28. Spring MURI Meeting; Seattle, WA An Advanced Calibration Method: Bayesian Model Averaging (BMA) Summary • The BMA predictive PDF is a weighted mixture over the members, p(y | f1, …, fK) = Σk wk g(y | ak + bk fk, σ²), with member-specific mean-bias correction parameters (ak, bk), member-specific BMA weights (wk), and a BMA variance σ² (not member-specific here, but it can be). • BMA has several advantages over MEC: • A time-varying uncertainty forecast. • A way to keep multi-modality, if it is warranted. • Maximizes information from short (2-4 week) training periods. • Allows for different relative skill between members through the BMA weights (multi-model, multi-scheme physics). [cf. Raftery et al. 2005, Mon. Wea. Rev.]
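A minimal sketch of evaluating the Gaussian-kernel BMA predictive density and point forecast from the quantities named above (bias parameters ak, bk, weights wk, shared variance σ²); array names and the shared-variance choice are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def bma_predictive_pdf(y, forecasts, a, b, weights, sigma):
    """BMA predictive density with Gaussian kernels:
        p(y | f_1..f_K) = sum_k w_k * N(y | a_k + b_k * f_k, sigma^2).
    a_k, b_k are member-specific mean-bias correction parameters, w_k the BMA
    weights, sigma the shared BMA standard deviation.  Illustrative sketch."""
    f = np.asarray(forecasts, dtype=float)
    mu = np.asarray(a) + np.asarray(b) * f        # bias-corrected member means
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * norm.pdf(y, loc=mu, scale=sigma)))

def bma_mean(forecasts, a, b, weights):
    """BMA point forecast: the weighted average of bias-corrected members."""
    f = np.asarray(forecasts, dtype=float)
    mu = np.asarray(a) + np.asarray(b) * f
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w / w.sum() * mu))
```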

  29. Spring MURI Meeting; Seattle, WA Extending BMA to Non-Gaussian Variables • For quantities such as wind speed and precipitation, the distributions are not only non-Gaussian but also not purely continuous: there are point masses at zero. • For probabilistic quantitative precipitation forecasts (PQPF): • Model P(Y = 0) with a logistic regression. • Model the distribution of Y given Y > 0 with a finite gamma mixture. • Fit the gamma means by a linear regression of the cube root of the observation on the forecast and an indicator function for no precipitation. • Fit the gamma variance parameters and BMA weights by the EM algorithm, with some modifications. [cf. Sloughter et al. 200x, manuscript in preparation]
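A hedged sketch of the mixed discrete-continuous predictive CDF outlined above, with a per-member logistic probability of zero precipitation and a gamma component for positive amounts. The link functions, coefficient names, and cube-root transform placement here are illustrative assumptions, not the fitted Sloughter et al. model.

```python
import numpy as np
from scipy.stats import gamma

def pqpf_cdf(y, forecasts, p0_coefs, mean_coefs, var_coefs, weights):
    """Predictive CDF of a mixed discrete-continuous PQPF, per member k:
      P(Y = 0 | f_k)         via a logistic regression on the cube-root forecast,
      P(Y <= y | Y > 0, f_k) via a gamma distribution whose mean and variance
                             are linear in the cube-root forecast.
    The member CDFs are then combined with the BMA weights.  Sketch only."""
    f3 = np.cbrt(np.asarray(forecasts, dtype=float))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()

    # Per-member probability of no precipitation (logistic regression).
    p_zero = 1.0 / (1.0 + np.exp(-(p0_coefs[0] + p0_coefs[1] * f3)))

    if y < 0.0:
        return 0.0

    # Per-member gamma component for positive amounts (moment-matched shape/scale).
    means = mean_coefs[0] + mean_coefs[1] * f3
    variances = var_coefs[0] + var_coefs[1] * f3
    shapes = means ** 2 / variances
    scales = variances / means
    cdf_pos = gamma.cdf(y, a=shapes, scale=scales)

    return float(np.sum(w * (p_zero + (1.0 - p_zero) * cdf_pos)))
```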
