Presentation Transcript


  1. NWP Verification with Shape-matching Algorithms: Hydrologic Applications and Extension to Ensembles
  Barbara Brown (1), Edward Tollerud (2), Tara Jensen (1), and Wallace Clark (2)
  (1) NCAR, USA; (2) NOAA Earth System Research Laboratory, USA
  bgb@ucar.edu
  ECAM/EMS 2011, 14 September 2011

  2. DTC and Testbed Collaborations
  • Developmental Testbed Center (DTC)
    • Mission: Provide a bridge between the research and operational communities to improve mesoscale NWP
    • Activities: Community support (e.g., access to operational models); model testing and evaluation
  • Goals of interactions with other “testbeds”:
    • Examine latest capabilities of high-resolution models
    • Evaluate impacts of physics options
    • New approaches for presenting and evaluating forecasts

  3. Testbed Collaborations
  • Hydrometeorological Testbed (HMT)
    • Evaluation of regional ensemble forecasts (including operational models) and global forecasts in the western U.S. (California)
    • Winter precipitation
    • Atmospheric rivers
  • Hazardous Weather Testbed (HWT)
    • Evaluation of storm-scale ensemble forecasts
    • Late-spring precipitation, reflectivity, cloud-top height
    • Comparison of model capabilities for high-impact weather forecasts

  4. Testbed Forecast Verification
  • Observations
    • HMT: gauges and Stage IV gauge analysis
    • HWT: NMQ 1-km radar and gauge analysis; radar
  • Traditional metrics (see the sketch below)
    • RMSE, bias, ME, POD, FAR, etc.
    • Brier score, reliability, ROC, etc.
  • Spatial approaches
    • Needed for evaluation of ensemble forecasts for the same reasons as for non-probabilistic forecasts (“double penalty”, impact of small errors in timing and location, etc.)
    • Neighborhood methods
    • Method for Object-based Diagnostic Evaluation (MODE)
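To make the traditional metrics above concrete, a minimal sketch (in NumPy; this is not the MET implementation, and the fields and thresholds are illustrative only) of POD, FAR, and the Brier score computed from gridded forecasts and observations:

```python
import numpy as np

def contingency_scores(fcst, obs, thresh):
    """POD and FAR from the 2x2 contingency table of thresholded fields."""
    f, o = fcst >= thresh, obs >= thresh
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses) if (hits + misses) > 0 else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) > 0 else np.nan
    return pod, far

def brier_score(prob_fcst, obs, thresh):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    outcome = (obs >= thresh).astype(float)
    return float(np.mean((prob_fcst - outcome) ** 2))

# Toy 6-h precipitation fields (mm); the numbers are invented for illustration.
obs  = np.array([[0.0, 5.0], [30.0, 2.0]])
fcst = np.array([[1.0, 4.0], [20.0, 28.0]])
print(contingency_scores(fcst, obs, thresh=25.4))                         # POD, FAR
print(brier_score(np.array([[0.1, 0.2], [0.8, 0.6]]), obs, thresh=25.4))  # Brier
```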

  5. New Spatial Verification Approaches
  Web site: http://www.ral.ucar.edu/projects/icp/
  • Neighborhood: successive smoothing of forecasts/obs
  • Object- and feature-based: evaluate attributes of identifiable features (see the sketch below)
  • Scale separation: measure scale-dependent error
  • Field deformation: measure distortion and displacement (phase error) for the whole field
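As a concrete illustration of the object/feature-based idea referenced above, a minimal sketch of one common way to identify precipitation objects (smooth, threshold, label connected regions). This follows the general convolution-threshold approach described for MODE, but the code and parameters are illustrative, not the MODE implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def identify_objects(field, smooth_size=5, thresh=6.35):
    """Smooth the field, threshold it, and label connected regions as objects."""
    smoothed = uniform_filter(field, size=smooth_size, mode="constant")
    mask = smoothed >= thresh
    labels, n_objects = label(mask)   # each connected region gets an integer label
    return labels, n_objects

# Synthetic precipitation field (mm); 6.35 mm corresponds to the 0.25 in threshold.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.5, scale=6.0, size=(80, 80))
labels, n = identify_objects(precip)
areas = [int((labels == k).sum()) for k in range(1, n + 1)]
print(n, areas)   # number of objects and their areas in grid squares
```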

  6. HMT: Standard Scores for Ensemble Inter-model QPF Comparisons
  • Example: RMSE results for December 2010
  • Dashed: HMT (WRF) ensemble members
  • Solid: deterministic members
  • Black: ensemble mean
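A minimal sketch of the kind of computation behind this comparison: RMSE per ensemble member and for the ensemble mean over matched forecast/observation pairs (the data here are synthetic and the shapes are illustrative):

```python
import numpy as np

def rmse(fcst, obs):
    return float(np.sqrt(np.mean((fcst - obs) ** 2)))

# obs: matched QPE/gauge values; members: (n_members, n_points) matched QPF values.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.7, scale=6.0, size=500)
members = obs + rng.normal(0.0, 3.0, size=(8, 500))   # synthetic member errors

member_rmse = [rmse(m, obs) for m in members]          # one value per member ("dashed")
ens_mean_rmse = rmse(members.mean(axis=0), obs)        # the ensemble-mean curve ("black")
print([round(r, 2) for r in member_rmse], round(ens_mean_rmse, 2))
```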

  7. HMT Application: MODE
  Panels: observations (OBS) and ensemble mean (Ens Mean); 19 December 2010, 72-h forecast, threshold for precip > 0.25".

  8. MODE Application to Atmospheric Rivers
  • QPF vs. IWV and vapor transport
  • Capture coastal strike timing and location
  • Large impacts on precipitation along the California coast and coastal mountains => major flooding impacts

  9. Atmospheric Rivers
  Figure: SSM/I integrated water vapor vs. GFS precipitable water at 72-, 48-, and 24-h lead times, with MODE object areas (Area = 369, 312, 306, 127).

  10. HWT Example: Attribute Diagnostics for NWP
  Neighborhood and object-based methods, REFC > 30 dBZ, at 20-, 22-, and 24-h lead times.
  • Neighborhood methods provide a sense of how the model performs at different scales through the Fractions Skill Score (FSS; see the sketch below). Example panel values: FSS = 0.30, 0.64, 0.14.
  • Object-based methods provide a sense of how forecast attributes compare with observed, including a measure of overall matching skill based on user-selected attributes. Example matched-object attributes:
    • Matched interest: 0.96; area ratio: 0.53; centroid distance: 92 km; P90 intensity ratio: 1.04
    • Matched interest: 0 (no match); area ratio, centroid distance, and P90 intensity ratio: n/a
    • Matched interest: 0.89; area ratio: 0.18; centroid distance: 112 km; P90 intensity ratio: 1.08
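For reference, a minimal sketch of the Fractions Skill Score used above: forecast and observed fields are thresholded, converted to neighborhood fractions, and compared. The threshold and neighborhood sizes below are illustrative, not the HWT settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, thresh, window):
    """Fractions Skill Score: 1 - MSE of neighborhood fractions / reference MSE."""
    pf = uniform_filter((fcst >= thresh).astype(float), size=window, mode="constant")
    po = uniform_filter((obs >= thresh).astype(float), size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return float(1.0 - mse / mse_ref) if mse_ref > 0 else np.nan

# Synthetic reflectivity-like fields; the forecast is the observed field displaced east,
# so FSS increases with neighborhood size (the scale at which the forecast gains skill).
rng = np.random.default_rng(2)
obs = rng.gamma(shape=0.5, scale=10.0, size=(100, 100))
fcst = np.roll(obs, shift=8, axis=1)
for window in (1, 5, 17, 33):
    print(window, round(fss(fcst, obs, thresh=30.0, window=window), 3))
```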

  11. MODE Application to HWT Ensembles
  Panels: CAPS PM mean vs. observed radar echo tops (RETOP).

  12. Applying Spatial Methods to Ensembles
  • As probabilities: areas do not have the “shape” of precipitation areas and may “spread” the area
  • As mean: the area is not equivalent to any of the underlying ensemble members
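A minimal sketch of the two reductions described on this slide, both computed from a stack of member fields (shapes, values, and the 25.4 mm threshold are illustrative): the exceedance-probability field tends to spread the area, while the ensemble-mean field need not resemble any individual member:

```python
import numpy as np

def exceedance_probability(members, thresh):
    """Fraction of members exceeding the threshold at each grid point."""
    return np.mean(members >= thresh, axis=0)

def ensemble_mean(members):
    """Point-wise mean of the member fields."""
    return np.mean(members, axis=0)

# members: (n_members, ny, nx) stack of precipitation forecasts (mm), synthetic here.
rng = np.random.default_rng(3)
members = rng.gamma(shape=0.6, scale=8.0, size=(10, 40, 40))
prob = exceedance_probability(members, thresh=25.4)
mean = ensemble_mean(members)
print(prob.shape, float(prob.max()), float(mean.max()))
```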

  13. Treatment of Spatial Ensemble Forecasts
  • Alternative: consider ensembles of “attributes”
  • Evaluate distributions of “attribute” errors
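One way to treat “ensembles of attributes” is to run the object identification on each member separately and then summarize each attribute (or attribute error) across members. A minimal sketch with invented per-member attribute records; the attribute names mirror the kind of quantities MODE reports, but the data structure and values are hypothetical:

```python
import numpy as np

# Hypothetical matched-object attributes, one record per ensemble member;
# in practice these would be read from the MODE output for each member.
member_attributes = [
    {"area_ratio": 0.53, "centroid_dist_km": 92.0,  "p90_intensity_ratio": 1.04},
    {"area_ratio": 0.18, "centroid_dist_km": 112.0, "p90_intensity_ratio": 1.08},
    {"area_ratio": 0.40, "centroid_dist_km": 75.0,  "p90_intensity_ratio": 0.96},
]

def attribute_distribution(records, name):
    """Median and interquartile range of one attribute across members."""
    values = np.array([r[name] for r in records])
    return {"median": float(np.median(values)),
            "iqr": float(np.percentile(values, 75) - np.percentile(values, 25))}

for name in ("area_ratio", "centroid_dist_km", "p90_intensity_ratio"):
    print(name, attribute_distribution(member_attributes, name))
```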

  14. Example: MODE Application to HMT Ensemble Members
  • Systematic microphysics impacts
  • The 3 Thompson scheme members (circled) are:
    • Less intense
    • Larger in area
  • Note:
    • Heavy tails
    • Non-symmetric distributions for both size and intensity (medians vs. averages)
  Figure: distributions of 90th-percentile intensity and object area for thresholds > 6.35 and > 25.4 mm.

  15. Probabilistic Fields (PQPF) and QPF Products
  Panel labels: PROBABILITY, QPF, QPE; Ens 4-km; SREF 32-km; 4-km Nbrhd; NAM 12-km; EnsMean 4-km APCP Prob.

  16. 50% Prob(APCP_06 > 25.4 mm) vs. QPE_06 > 25.4 mm: a Good Forecast with Displacement Error?
  • Traditional metrics: Brier score 0.07; area under ROC 0.62
  • Spatial metrics:
    • Centroid distance: Obj 1) 200 km; Obj 2) 88 km
    • Area ratio: Obj 1) 0.69; Obj 2) 0.65
    • Object PODY: 0.72; object FAR: 0.32
    • Median of max interest: 0.77
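For the spatial metrics quoted here, a minimal sketch of how centroid distance and area ratio can be computed for one matched pair of objects represented as binary masks; the grid spacing and the toy objects are assumptions for the example, and this is not the MODE code:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary object mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def pair_attributes(fcst_mask, obs_mask, grid_km=4.0):
    """Centroid distance (km) and area ratio (smaller/larger) for a matched pair."""
    (fy, fx), (oy, ox) = centroid(fcst_mask), centroid(obs_mask)
    dist_km = float(np.hypot(fy - oy, fx - ox) * grid_km)
    a_f, a_o = fcst_mask.sum(), obs_mask.sum()
    area_ratio = float(min(a_f, a_o) / max(a_f, a_o))
    return dist_km, area_ratio

# Toy forecast/observed objects: two displaced rectangles on a 100 x 100 grid.
fcst_mask = np.zeros((100, 100), dtype=bool); fcst_mask[40:60, 30:55] = True
obs_mask  = np.zeros((100, 100), dtype=bool); obs_mask[45:65, 50:80] = True
print(pair_attributes(fcst_mask, obs_mask))
```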

  17. Summary
  • Evaluation of high-impact weather is moving toward the use of spatial verification methods
  • Initial efforts are in place to bring these methods forward for ensemble forecast verification

  18. MODE-based evaluations of AR objects

  19. Spatial Method Motivation
  • Traditional approaches ignore spatial structure in many (most?) forecasts
    • Spatial correlations
    • Small errors lead to poor scores (squared errors; smooth forecasts are rewarded; see the sketch below)
  • Methods for evaluation are not diagnostic
  • Same issues exist for ensemble forecasts
  Figure: observed vs. forecast fields.
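The “double penalty” point can be illustrated with a tiny synthetic example (the fields and numbers are invented): a sharp forecast feature displaced by a few grid points scores worse on RMSE than a heavily smoothed, low-amplitude forecast, even though the displaced forecast arguably contains more useful information:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

obs = np.zeros((60, 60))
obs[25:35, 25:35] = 20.0                    # observed precipitation feature (mm)

displaced = np.roll(obs, shift=6, axis=1)   # correct structure, shifted east
smooth = gaussian_filter(obs, sigma=8)      # feature smeared out, low amplitude

rmse = lambda f, o: float(np.sqrt(np.mean((f - o) ** 2)))
print("displaced RMSE:", rmse(displaced, obs))  # penalized twice: miss + false alarm
print("smooth RMSE:   ", rmse(smooth, obs))     # lower, despite losing the feature
```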

  20. MODE example: 9 May 2011 Ensemble Workshop

  21. MODE Example: Combined Objects
  • Consider and compare various attributes, such as:
    • Area
    • Location
    • Intensity distribution
    • Shape / orientation
    • Overlap with obs
    • Measure of overall “fit” to obs (see the interest sketch below)
  • Summarize distributions of attributes and differences
  • In some cases, conversion to probabilities may be informative
  • Spatial methods can be used for evaluation
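MODE combines the attribute comparisons into an overall “interest” value through fuzzy-logic interest maps and weights; a minimal sketch of that idea with invented interest functions and weights (the real MODE configuration defines its own interest maps and default weights):

```python
import numpy as np

# Invented interest maps: each maps an attribute comparison to [0, 1],
# where 1 means the forecast and observed objects agree well on that attribute.
def interest_centroid_dist(d_km, zero_at=400.0):
    return max(0.0, 1.0 - d_km / zero_at)

def interest_area_ratio(ratio):        # ratio in (0, 1]; 1 means equal areas
    return ratio

def interest_intensity_ratio(ratio):   # penalize departures from 1 in either direction
    return max(0.0, 1.0 - abs(ratio - 1.0))

def total_interest(attrs, weights):
    """Weighted average of the individual interest values."""
    values = np.array([
        interest_centroid_dist(attrs["centroid_dist_km"]),
        interest_area_ratio(attrs["area_ratio"]),
        interest_intensity_ratio(attrs["p90_intensity_ratio"]),
    ])
    w = np.array(weights, dtype=float)
    return float(np.sum(w * values) / np.sum(w))

# Attribute values like those shown earlier; the weights are arbitrary for the example.
attrs = {"centroid_dist_km": 92.0, "area_ratio": 0.53, "p90_intensity_ratio": 1.04}
print(total_interest(attrs, weights=(2.0, 1.0, 1.0)))
```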

  22. Spatial Attributes
  Panels: overall field comparison by MODE (“interest” summary) vs. lead time; object intersection areas vs. lead time.
