
Verification


Presentation Transcript


  1. Verification Lisa Holts Kevin Werner

  2. Outline • Motivation • Verification capabilities • Example

  3. Motivations • Improve understanding of uncertainty from forecast tools • How good are our forecasts? • How much value do human forecasters add? • Is one tool better than another? • How do the above questions depend on lead time? Amount of runoff? • Can we use what we learn to: • Improve forecasts? • Make forecast process more efficient? • Improve collaboration? • Convey uncertainty to our users?

  4. Verification Capabilities • Search By • Station ID • State, River, Location

  5. Verification Capabilities • Select Forecast Period • Forecasts Available • Climatology Available

  6. Verification Capabilities • Select Years • Appear on Period Selection • Select any set of years available

  7. Verification Capabilities • Select Data Source • All Selected By Default

  8. Verification Capabilities • Select Plot Type • Historical By Default

  9. Verification Capabilities • Load Statistics

  10. Verification Capabilities • Navigation Bar • Water Supply Forecasts • Verification • Data Checkout

  11. Verification Capabilities • Location • Change • Clear • Return to main menu

  12. Verification Capabilities • Plot Area • Auto Scroll • Clearly Labeled

  13. Verification Capabilities • About • Explains the displayed graph • Changes when graph is changed

  14. Verification Capabilities • Site Options • Saves five previous sites viewed • Print Graph • Display data in Data Checkout

  15. Verification Capabilities • Side Options • Statistic • Forecast Type/Data Source • Time Scale • Threshold

  16. Verification Capabilities • Statistic • 18 Statistic choices • Graph change on click • Mouse-over displays info about graph • Group Titles • Mouse over and click to display other options

  17. Verification Capabilities • Forecast Type • Change Data Sources to be displayed • Graph change on click

  18. Verification Capabilities • Time Scale • Change Period • Modify Years • Graph change on click • Month • Only displayed when Contingency Table Statistic is chosen • Change the month the table displays

  19. Verification Capabilities • Threshold • Default is Climatology / Historical Average • Enter value in KAF and press Enter • Type ‘mean’ to return to default • Valid for all statistics except the Rank Histogram options • Graph change on ‘Enter’

  20. Assignment • Loose instructions provided • Results should drive study • Goal is not to see five sets of the same plots… rather something focused on the interesting parts for the basin involved

  21. Assignment (cont’d) • Step 1: Identify forecast points with extensive data and interest • Step 2: Examine data for visual clues • Step 3: Error and skill scores • Step 4: Categorical statistics • Step 5: Forecast uncertainty • Step 6: Climate variability • Step 7: Snow / Precipitation connections

  22. Example • DIRC2 has forecast / observation pairs from 1991 on. • We’ll look at the available plots and then consolidate them into the case study…

  23. Step 2: Examine data

  24. Step 3: Error and Skill score

  25. Error by year • Comparison of forecast error to “average” error is a useful diagnostic tool
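A minimal sketch (not the verification tool's own code) of how typical error and skill statistics like these can be computed from forecast/observation pairs; the array values, the KAF units, and the use of the period-of-record mean as the climatology reference are assumptions for illustration:

```python
import numpy as np

forecasts = np.array([310.0, 275.0, 420.0, 190.0, 360.0])     # KAF, hypothetical
observations = np.array([295.0, 300.0, 505.0, 170.0, 340.0])  # KAF, hypothetical

errors = forecasts - observations
mae = np.mean(np.abs(errors))          # mean absolute error
rmse = np.sqrt(np.mean(errors ** 2))   # root mean square error

# Skill relative to climatology: 1 is perfect, 0 is no better than forecasting
# the long-term average every year, negative is worse than climatology.
climatology = np.full_like(observations, observations.mean())
mse_fcst = np.mean((forecasts - observations) ** 2)
mse_clim = np.mean((climatology - observations) ** 2)
skill_score = 1.0 - mse_fcst / mse_clim

print(f"MAE = {mae:.1f} KAF, RMSE = {rmse:.1f} KAF, skill vs climatology = {skill_score:.2f}")
```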

  26. Percent Difference
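A minimal sketch of one common percent-difference convention (forecast minus observed, divided by observed); the plot's exact definition may differ, and the values are hypothetical:

```python
import numpy as np

forecasts = np.array([310.0, 275.0, 420.0])     # KAF, hypothetical
observations = np.array([295.0, 300.0, 505.0])  # KAF, hypothetical

# Positive = overforecast, negative = underforecast
percent_diff = 100.0 * (forecasts - observations) / observations
print(percent_diff)
```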

  27. Step 4: Categorical: Probability of Detection • High years much more difficult to detect in the early season • All forecasts during low years have been for low volumes

  28. False Alarm Rate • Similar story here as with POD
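A minimal sketch of the categorical statistics behind these two plots, built from a 2x2 contingency table for "high flow" years. The 400 KAF event threshold and the sample values are hypothetical, and the slide's "false alarm rate" is computed here as the false alarm ratio (false alarms divided by forecast events), which is one common convention:

```python
import numpy as np

forecasts = np.array([420.0, 180.0, 510.0, 300.0, 260.0, 450.0])     # KAF, hypothetical
observations = np.array([505.0, 170.0, 480.0, 340.0, 290.0, 310.0])  # KAF, hypothetical
threshold = 400.0  # KAF, hypothetical definition of a "high" year

fcst_event = forecasts >= threshold
obs_event = observations >= threshold

hits = np.sum(fcst_event & obs_event)          # event forecast and observed
misses = np.sum(~fcst_event & obs_event)       # event observed but not forecast
false_alarms = np.sum(fcst_event & ~obs_event) # event forecast but not observed

pod = hits / (hits + misses)                 # fraction of observed events that were forecast
far = false_alarms / (hits + false_alarms)   # fraction of forecast events that did not occur
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```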

  29. Step 5: Forecast Uncertainty: Forecast Distribution • Some tendency to underforecast • 26% of observed streamflow falls above the 10% exceedance forecast value • Note that results improve if you consider only 2000-2008
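A minimal sketch of the calibration check described above: count how often the observed volume exceeds the forecast's 10% exceedance value, which should happen roughly 10% of the time if the forecast distribution is reliable. The arrays are hypothetical stand-ins for DIRC2 data:

```python
import numpy as np

exceed_10 = np.array([520.0, 610.0, 480.0, 700.0, 550.0])     # KAF, 10% exceedance forecasts
observations = np.array([530.0, 590.0, 495.0, 660.0, 540.0])  # KAF, matching observations

frac_above = np.mean(observations > exceed_10)
print(f"Observed above the 10% exceedance forecast {100 * frac_above:.0f}% of the time")
```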

  30. Forecast Uncertainty by month

  31. Observed Lag-1 Analysis
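A minimal sketch of a lag-1 analysis on observed annual volumes, i.e. correlating each year's flow with the following year's; the sample values are hypothetical:

```python
import numpy as np

observed = np.array([295.0, 300.0, 505.0, 170.0, 340.0, 410.0, 260.0, 330.0])  # KAF by year, hypothetical

# Correlate years 1..n-1 with years 2..n; a weak value indicates little persistence.
lag1_corr = np.corrcoef(observed[:-1], observed[1:])[0, 1]
print(f"Lag-1 correlation = {lag1_corr:.2f}")
```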

  32. Results that might be presented …

  33. ESP is really, really good • ESP is much better than any other forecast method, particularly in high years • With “real” ESP as opposed to ESP reforecasts, results should be even better

  34. Forecast Quality • January forecasts are essentially as good as climatology • Coordination process appears to add (marginal) value except in April • Forecast tweaks much smaller than the average error should not be entertained unless some overriding rationale exists to support them

  35. High flows hard, low flows not so hard • Forecast system is perfect (POD = 100% in all cases) for detecting below-average flows • Forecasts struggle with detecting high flows even through May

  36. Reasonable max not so reasonable • Observed streamflow greater than the reasonable max nearly 30% of the time. • Problem in the 1990s… results since 1999 are much better

  37. Climate variability not much help • Very low predictability based on prior year hydrology or climate index • Weak tendency for low years to follow low years

  38. Possible Application (Discussion) • April DIRC2 forecast is 150 KAF. Average error for April is 35 KAF. How could you use this information to: • Improve your forecast? • Improve your forecast process? • Improve forecast application?
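One hedged sketch of the arithmetic that might feed this discussion: expressing the single-value forecast as a range using the average error. This is an illustration only, not the intended answer to the discussion question:

```python
forecast = 150.0   # KAF, April DIRC2 forecast from the slide
avg_error = 35.0   # KAF, average April error from the slide

# Present the forecast with a typical error band rather than a single number.
low, high = forecast - avg_error, forecast + avg_error
print(f"Forecast with typical error band: {low:.0f}-{high:.0f} KAF")  # 115-185 KAF
```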
