
NWS Hydrologic Forecast Verification Team: Status and Discussion

This document summarizes the standard verification strategies recommended for hydrologic forecasts, including metrics, products, and analyses. It also covers two proposed sensitivity analyses: optimizing the QPF horizon and assessing whether run-time modifications improve forecasts. The recommendations aim to improve the accuracy and usefulness of hydrologic forecasts.


Presentation Transcript


  1. NWS Hydrologic Forecast Verification Team: Status and Discussion
     Julie Demargne, OHD/HSMB Hydrologic Ensemble Prediction (HEP) group
     http://www.nws.noaa.gov/ohd/rfcdev/projects/rfcHVT_chart.html

  2. CHPS Verification Service
     [Architecture diagram linking: databases, archive files, and external data; the Verification Component (IVP, EVS); Real-Time Verification (analogs, summary stats…); the Interactive Forecasting System; the Display/Product Generator (FEWS Spatial Display, FEWS Time Series Display, WR Water Supply Web, Graphics Generator); forecast products; and verification statistics & products.]

  3. RFC Contribution
     • Input from RFCs is crucial to provide useful verification information to:
       • modelers and forecasters, to guide improvements of the forecasting system
       • users, to maximize the utility of forecasts in their decisions
     • Key questions:
       • What verification metrics/products are the most meaningful?
       • What are the best methods to supply verification information?

  4. 1st Team Charter
     • July 07: 1st team charter to propose standard verification strategies (metrics, products, and verification analyses) by Sep. 2009
     • August 07: 1st RFC verification workshop
     • November 08: 2nd RFC verification workshop
     • July-August 09: presentation at HIC meeting; feedback from HICs on 2nd team charter draft (5 HICs replied)
       • Main feedback: include SCHs to develop meaningful products
     • September 09: final team report on recommended standard verification strategies and future team activities (2nd team charter)

  5. 2nd Team Charter
     • October 09: finalized 2nd team charter for Oct. 2009 - Sep. 2011
     • Mission:
       • test verification standards (RFC case studies)
       • perform user analysis of products w/ SCHs and OCWWS/HSD and help develop requirements for dissemination
       • support design/development of the CHPS Verification Service
     • Main deliverables:
       • report on improved verification standards and case studies done by all RFCs and by OHD/HSMB/HEP
       • prototype functionalities based on IVP, EVS, and CHPS to generate standard verification products

  6. Status on 2nd Team
     • 2 meetings, in Dec. 09 and Feb. 10:
       • presentation of OHD verification activities (FY10 plan)
       • presentation of MBRFC and NCRFC QPF verification case studies
       • survey on real-time verification (analog query and summary verification displays): feedback from all RFCs

  7. Clarifications on Team Activities
     • The verification team report listed only recommendations
     • Proposed sensitivity analyses for RFCs:
       • What is the optimal QPF horizon for hydrologic forecasts?
       • Do run-time mods made on the fly improve forecasts?
     • Each HIC decides how the RFC team member(s) may contribute to the team
     • Each RFC defines its own verification case study to test the proposed verification standards

  8. Recommendations: 1) QPF horizon
     • Goal: what is the optimal QPF horizon for hydrologic forecasts?
     • QPF horizons to test:
       • 0 (no QPF), 6-hr, 12-hr, 18-hr, 24-hr, 30-hr, 36-hr, 48-hr, 72-hr
       • longer horizons (96-hr, 120-hr, …) depending on RFC and location
     • Model states to use:
       • similar to operational mods, except mods that impact QPF
       • store metadata recording which mods were used in these runs
     • What forecasts to verify:
       • 6-hr stage forecasts over a 7-day window (a longer window for slow-response basins), as in the analysis sketch below
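To make the proposed sensitivity analysis concrete, here is a minimal sketch, assuming the hindcasts have already been rerun once per QPF horizon and aligned with observed 6-hr stages. The RMSE metric and the dict-of-arrays input layout are illustrative choices, not a prescribed OHD implementation:

```python
import numpy as np

# QPF horizons recommended for testing (hours); 0 means no QPF.
QPF_HORIZONS_HR = [0, 6, 12, 18, 24, 30, 36, 48, 72]

def rmse(forecast, observed):
    """Root-mean-square error over valid forecast/observation pairs."""
    forecast = np.asarray(forecast, float)
    observed = np.asarray(observed, float)
    mask = ~np.isnan(forecast) & ~np.isnan(observed)
    return float(np.sqrt(np.mean((forecast[mask] - observed[mask]) ** 2)))

def horizon_sensitivity(forecasts_by_horizon, observed):
    """RMSE of 6-hr stage forecasts for each tested QPF horizon.

    forecasts_by_horizon: dict mapping horizon (hr) -> array of stage
    forecasts aligned with the observed array (illustrative layout).
    """
    return {h: rmse(f, observed) for h, f in forecasts_by_horizon.items()}
```

Plotting the resulting score against the horizon, per basin and lead time, would show where additional QPF stops improving the stage forecasts.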

  9. Recommendations: 2) run-time mods
     • Goal: do run-time mods made on the fly improve forecasts?
     • 4 scenarios:
       • operational forecasts (w/ all mods)
       • forecasts w/ best available obs. and fcst. inputs, wo/ on-the-fly mods
       • forecasts w/ best available obs. inputs (no fcst), w/ all mods
       • forecasts w/ best available obs. inputs (no fcst), wo/ on-the-fly mods
     • What forecasts to verify:
       • 6-hr stage forecasts over the same window as in operations
     • Model states:
       • carryover from 5 days earlier (w/ past mods) + a priori mods (known before producing any forecast)
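A hedged sketch of the four-scenario comparison follows; the scenario labels paraphrase the slide, and the alignment of each scenario's hindcast run with the observations is assumed to have been done against the RFC's own archive:

```python
import numpy as np

# Scenario labels paraphrasing slide 9; how runs map to arrays depends
# on each RFC's hindcast archive.
SCENARIOS = [
    "operational (all mods)",
    "best obs + fcst inputs, no on-the-fly mods",
    "best obs inputs only (no fcst), all mods",
    "best obs inputs only (no fcst), no on-the-fly mods",
]

def compare_scenarios(forecasts_by_scenario, observed):
    """RMSE of 6-hr stage forecasts for each run-time mods scenario.

    forecasts_by_scenario: dict mapping a scenario label -> array of
    stage forecasts aligned with the observed stages.
    """
    observed = np.asarray(observed, float)
    results = {}
    for label, fcst in forecasts_by_scenario.items():
        fcst = np.asarray(fcst, float)
        mask = ~np.isnan(fcst) & ~np.isnan(observed)
        results[label] = float(
            np.sqrt(np.mean((fcst[mask] - observed[mask]) ** 2))
        )
    return results
```

Comparing the "all mods" scores against their "no on-the-fly mods" counterparts, with and without forecast inputs, isolates the value added by the mods themselves.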

  10. Discussion
     • Other ways to explore the QPF horizon and run-time mods?
     • Feedback on RFCs' contribution to verification?
     • How to best coordinate between the Verification Team, CAT/CAT2 Teams, Graphics Generator Requirements Team, SCHs, and OCWWS/HSD?
     • Feedback on future verification activities at OHD?

  11. Thank you! (Extra slides follow.)

  12. Final team report: recommendations
     • Key verification metrics at 4 levels of information, for single-valued and probabilistic forecasts:
       • data information (scatter plots, box plots, time series plots)
       • summary information (e.g., skill scores)
       • more detailed information (e.g., measures of reliability, resolution, correlation)
       • sophisticated information (e.g., for specific events)
     • Corresponding verification products
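As a concrete instance of the "summary information" level, a generic skill score has the form SS = 1 - score_fcst / score_ref. The sketch below uses MSE and leaves the reference (e.g., persistence or climatology) to the caller; this is the textbook construction, not a specific team prescription:

```python
import numpy as np

def mse_skill_score(forecast, observed, reference):
    """SS = 1 - MSE(forecast) / MSE(reference).

    1 is a perfect forecast; 0 means no improvement over the reference
    (e.g., persistence or climatology); negative means worse than it.
    """
    forecast = np.asarray(forecast, float)
    observed = np.asarray(observed, float)
    reference = np.asarray(reference, float)
    mse_fcst = np.mean((forecast - observed) ** 2)
    mse_ref = np.mean((reference - observed) ** 2)
    return float(1.0 - mse_fcst / mse_ref)
```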

  13. Recommended metrics
     • 4 different levels of information

  14. Recommended analyses (see the sketch after this list)
     • Analyze any new forecast process with verification
     • Use different temporal aggregations (e.g., weekly max. flow)
     • Analyze verification scores as a function of lead time; if performance is similar, data can be pooled across lead times
     • Perform spatial aggregation carefully:
       • analyze results for each basin, plotted on spatial maps
       • use normalized metrics (e.g., skill scores)
     • Evaluate forecast performance under different conditions:
       • w/ time conditioning (by month and season)
       • w/ atmospheric/hydrologic conditioning using probability and absolute thresholds (e.g., 90th percentile vs. Flood Stage)
     • Report verification scores with sample size (in the future, confidence intervals)
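Two of these analyses translate directly into code. The sketch below assumes pandas and illustrative column names rather than any fixed OHD schema; it shows temporal aggregation to weekly maximum flow and conditioning on high observed flows (above the 90th percentile), returning the sample size so it can be reported alongside the scores:

```python
import pandas as pd

def weekly_max_flow(series):
    """Aggregate a 6-hr flow series (DatetimeIndex) to weekly maxima."""
    return series.resample("W").max()

def condition_on_high_obs(pairs, pct=0.90):
    """Subset forecast/observation pairs to high-flow events.

    pairs: DataFrame with 'forecast' and 'observed' columns (illustrative
    names). Returns the rows where the observation exceeds its pct
    quantile, plus the sample size, so scores computed on the subset can
    be reported with it.
    """
    threshold = pairs["observed"].quantile(pct)
    subset = pairs[pairs["observed"] > threshold]
    return subset, len(subset)
```

An absolute threshold such as Flood Stage could be substituted for the quantile in the same way.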

  15. Diagnostic verification products
     • Example: skill score maps by month (January, April, October shown); the smaller the score, the better

  16. Real-time verification: analogs
     • Approach: select analogs from a pre-defined set of historical events and compare with the 'live' forecast
     [Figure: a live forecast shown alongside three analog forecast/observed pairs, annotated "Live forecast for Flood is likely to be too high"]
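A minimal sketch of the analog selection step, assuming historical events are stored as forecast/observed hydrograph pairs and using Euclidean distance on the forecast traces as an illustrative similarity measure (the operational analog query may well use other criteria):

```python
import numpy as np

def select_analogs(live_forecast, historical_events, n_analogs=3):
    """Rank pre-defined historical events by similarity to the live forecast.

    historical_events: iterable of dicts with 'forecast' and 'observed'
    hydrograph arrays of the same length as live_forecast (assumed
    structure). Returns the n_analogs closest events, whose observed
    traces can then be displayed beside the live forecast.
    """
    live = np.asarray(live_forecast, float)
    ranked = sorted(
        historical_events,
        key=lambda ev: float(
            np.linalg.norm(np.asarray(ev["forecast"], float) - live)
        ),
    )
    return ranked[:n_analogs]
```

If the analogs' forecasts consistently overshot their observations, the display can warn, as in the slide's example, that the live flood forecast is likely too high.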

  17. www.nws.noaa.gov/ohd/rfcdev/projects/rfcHVT_chart.html
