

  1. Verification and calibration of probabilistic precipitation forecasts derived from neighborhood and object based methods for a convection-allowing ensemble
  Aaron Johnson and Xuguang Wang
  School of Meteorology and Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, OK
  Acknowledgement: F. Kong, M. Xue, K. Thomas, K. Brewster, Y. Wang, J. Gao
  Warn-on-Forecast and High Impact Weather Workshop, 9 February 2012

  2. Outline
  • Motivation and convection-allowing ensemble overview
  • Non-traditional methods of generating probabilistic forecasts
  • Calibration methods
  • Results
    • Neighborhood based
      • Full ensemble without calibration
      • Full ensemble with calibration
      • Sub-ensembles without calibration
      • Sub-ensembles with calibration
    • Object based
      • Full ensemble without calibration
      • Full ensemble with calibration
      • Sub-ensembles without calibration
      • Sub-ensembles with calibration

  3. Forecast example • Hourly accumulated precipitation • Near-CONUS domain • Subjective impressions of storm structures

  4. Motivation
  • Numerous calibration studies exist for meso- and global-scale ensembles (e.g., Wang and Bishop 2005; Wilks and Hamill 2007; Sloughter et al. 2007)
    • How do different probabilistic forecast calibrations compare at convection-allowing resolution?
  • Neighborhood methods relax the grid-point sensitivity of high-resolution forecasts (e.g., Ebert 2009), while object based methods retain storm-scale features but are typically applied to deterministic forecasts (e.g., Davis et al. 2006; Gallus 2010)
    • How skillful are such non-traditional probabilistic forecasts before and after calibration?
    • How should probabilistic forecasts be generated at convection-allowing resolution?
  • The 2009 CAPS ensemble forecasts for the HWT Spring Experiment clustered according to WRF model dynamics (Johnson et al. 2011)
    • Is multi-model necessary? Does the conclusion change before and after calibration?

  5. Assimilation of radar reflectivity and velocity using ARPS 3DVAR and cloud analysis for 17 members.
  • 10 members are from WRF-ARW, 8 members from WRF-NMM, and 2 members from ARPS.
  • 20 members initialized at 00 UTC and integrated 30 hours over a near-CONUS domain on a 4 km grid without cumulus parameterization, on 26 days from 29 April through 5 June 2009.
  • Initial background field from the 00 UTC NCEP NAM analysis; coarser (~35 km) resolution IC/LBC perturbations obtained from NCEP SREF forecasts.
  • Perturbations to microphysics, planetary boundary layer, shortwave radiation, and land surface model physics schemes.

  6. Methods of Generating Probabilistic Forecasts
  • Neighborhood based probabilistic forecasts
    • Event being forecast: accumulated precipitation exceeding a threshold
    • Probability obtained from: the percentage of grid points within a search radius (48 km), pooled over all members, that exceed the threshold (a code sketch follows below)
  • Object based probabilistic forecasts
    • Event being forecast: an object of interest
    • Probability obtained from: the percentage of ensemble members in which the forecast object occurs
  [Figure 8 from Schwartz et al. (2010)]
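The neighborhood calculation above lends itself to a short code sketch. The following is a minimal illustration assuming a 4 km grid, the 48 km search radius, and array shapes and names invented for the example; it is not the authors' code.

```python
# Minimal sketch of the neighborhood based probability, assuming a 4 km grid
# and a 48 km search radius. Array shapes, names, and the edge handling are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.ndimage import convolve

def neighborhood_probability(precip, threshold, radius_km=48.0, dx_km=4.0):
    """precip: (n_members, ny, nx) accumulated precipitation from the ensemble.
    Returns an (ny, nx) field giving the fraction of member/grid-point pairs
    within radius_km of each point that exceed the threshold."""
    r = int(round(radius_km / dx_km))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = ((yy ** 2 + xx ** 2) <= r ** 2).astype(float)
    kernel /= kernel.sum()                        # average over the circular footprint
    exceed = (precip >= threshold).astype(float)  # binary exceedance, one field per member
    # Smooth each member's exceedance field over the neighborhood, then
    # average across members to obtain the ensemble probability.
    smoothed = [convolve(field, kernel, mode="nearest") for field in exceed]
    return np.mean(smoothed, axis=0)
```

For instance, with hourly precipitation in millimetres, neighborhood_probability(precip, threshold=6.35) would approximate the probability of exceeding 0.25 in within 48 km of each grid point.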

  7. Definition of Objects

  8. Calibration Methods
  • Reliability diagram method: the forecast probability is replaced with the observed frequency from the training period
  • Schaffer et al. (2011) method: an extension of the reliability diagram method that includes more parameters
  • Logistic regression (a sketch follows below):
    • Neighborhood based: x1 = mean of NP0.25, x2 = standard deviation of NP0.25
    • Object based: x1 = uncalibrated forecast probability, x2 = ln(area)
  • Bias adjustment of each member: adjust forecast values so that the CDF of the forecasts matches that of the observations (Hamill and Whitaker 2006)
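Two of the methods above, the reliability diagram method and logistic regression, can be sketched briefly. The function names, the probability binning, and the use of scikit-learn are assumptions made for illustration rather than the authors' implementation.

```python
# Hedged sketches of two of the calibration methods on this slide; names,
# the binning choice, and the use of scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reliability_calibration(train_prob, train_obs, n_bins=10):
    """Reliability diagram method: map each forecast-probability bin to the
    observed relative frequency of the event in that bin during training, and
    return a function that replaces new forecast probabilities accordingly.
    train_prob and train_obs are numpy arrays (obs: 1 if event occurred, else 0)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(train_prob, edges) - 1, 0, n_bins - 1)
    freq = np.array([train_obs[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])

    def apply(prob):
        i = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
        return freq[i]

    return apply

def fit_object_lr(raw_prob, area, observed):
    """Logistic regression for the object based forecasts, with predictors
    x1 = uncalibrated probability and x2 = ln(area), fit to binary outcomes
    (1 if the object verified, 0 otherwise) from a training period."""
    X = np.column_stack([raw_prob, np.log(area)])
    return LogisticRegression().fit(X, observed)

def calibrate_object_lr(model, raw_prob, area):
    X = np.column_stack([raw_prob, np.log(area)])
    return model.predict_proba(X)[:, 1]   # calibrated probability of occurrence
```

The neighborhood based logistic regression would use the same machinery with the predictors replaced by the mean and standard deviation of NP0.25 listed above.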

  9. Neighborhood Results: Uncalibrated Full Ensemble • Skill exhibits a diurnal cycle for most thresholds • Skill also depends on the threshold and accumulation period
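The slides do not state which skill measure is plotted in these results; one standard choice for probabilistic precipitation forecasts is the Brier skill score, sketched below purely as background (the metric, the reference forecast, and the names are assumptions, not necessarily what the authors used).

```python
# Hedged sketch of the Brier skill score (BSS), a common skill measure for
# probabilistic forecasts; whether this is the metric shown on these slides
# is an assumption.
import numpy as np

def brier_skill_score(prob, obs, climo):
    """prob: forecast probabilities; obs: 1 where the event occurred, else 0;
    climo: reference probability (e.g., sample climatology of the event).
    BSS = 1 - BS_forecast / BS_reference; larger is better, and 0 means no
    skill relative to the reference."""
    bs_forecast = np.mean((prob - obs) ** 2)
    bs_reference = np.mean((climo - obs) ** 2)
    return 1.0 - bs_forecast / bs_reference
```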

  10. Neighborhood Results: Calibrated Full Ensemble • Skill improvement limited to the periods of skill minima • During skill minima, similar improvements from all calibrations

  11. Neighborhood Results: Uncalibrated Sub-Ensembles • ARW significantly more skillful than NMM for almost all lead times and thresholds • Multi-Model is not significantly more skillful than ARW

  12. Neighborhood Results: Calibrated Sub-Ensembles • Differences among the different sub-ensembles are reduced. • Multi-Model only shows advantages at 27-30 hour lead times.

  13. Object Based Results: Full Ensemble
  • Uncalibrated:
    • Skill minimum during the first 6 hours, when members tend to be too similar (i.e., underdispersive)
    • Lower skill than the neighborhood based forecasts
    • Lower skill for hourly than for 6-hourly accumulations
  • Calibrated:
    • Bias adjustment is the least effective and logistic regression (LR) is the most effective.

  14. Object Based Results: Sub-Ensembles
  • Uncalibrated:
    • ARW significantly more skillful than NMM.
    • Multi-model did not show an advantage compared to ARW.
  • Calibrated:
    • Again, more skillful after calibration and more skillful for the longer accumulation period.
    • As with the neighborhood probabilistic forecasts, differences in skill among the sub-ensembles are reduced by calibration.

  15. Conclusions
  • Probabilistic precipitation forecasts from a convection-allowing ensemble for the 2009 NOAA HWT Spring Experiment were verified and calibrated.
    • Probabilistic forecasts were derived from both the neighborhood method and a new object based method.
    • Various calibrations, including reliability-based, logistic regression, and individual-member bias correction methods, were implemented.
  • For both the neighborhood and the object based probabilistic forecasts, calibration significantly improved the skill of the forecasts compared to the uncalibrated forecasts during skill minima.
    • For the neighborhood probabilistic forecasts, the skill of the different calibrations was similar.
    • For the object based probabilistic forecasts, the LR method was most effective.
  • Sub-ensembles from ARW and NMM were also verified and calibrated to guide optimal ensemble design.
    • ARW was more skillful than NMM for both neighborhood and object based probabilistic forecasts.
    • The difference in skill was reduced by calibration.
    • A multi-model ensemble of ARW and NMM members only shows an advantage over a single-model ensemble after the 24-hour lead time for the neighborhood based forecasts.

  16. Example of object based method
  [Figure: the control forecast with objects A and B, alongside panels from the other ensemble members]
  Probability of occurrence is forecast for the control forecast objects, A and B. The other panels are forecasts from the other members. The forecast probability of A is 1/8 = 12.5%; the forecast probability of B is 7/8 = 87.5%. (A code sketch follows below.)
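A minimal sketch of the probability computation in this example; the object-matching criterion is left as a placeholder function because the transcript does not spell out how objects are matched between members.

```python
# Minimal sketch of the object based probability in the example above. The
# matching criterion is a placeholder; the slides do not define exactly how
# forecast objects are matched across members.
def object_probability(control_object, member_objects, matches):
    """control_object: an object identified in the control forecast (e.g., A or B).
    member_objects: one list of objects per non-control member (8 members here).
    matches(a, b): True if objects a and b are judged to be the same event.
    Returns the fraction of members containing a matching object, e.g. 7/8 = 87.5%."""
    hits = sum(any(matches(control_object, obj) for obj in objs)
               for objs in member_objects)
    return hits / len(member_objects)
```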

  17. Sensitivity of Neighborhood based calibrations to training length

  18. Sensitivity of Object based calibrations to training length
