
How Good Does a Forecast Really Need To Be?



1. How Good Does a Forecast Really Need To Be?
David Myrick
Western Region Headquarters, Scientific Services Division
NFUSE Conference Call, 4/11/07

2. Motivating Question
• Can we use uncertainty information as a threshold for gauging when a forecast is good enough?
• This is an informal talk, with lots of examples.
• Approach the question from the viewpoint of observational uncertainty.

3. Points → Areas (Grid Boxes)
• No longer forecasting for 8-10 CCF points
• Each CWA – thousands of 2.5 or 5 km grid boxes
• Twofold need for grid-based verification:
  • Forecaster feedback across the entire grid
  • Identifying ways to evolve our services to focus more attention on high-impact events

4. WR Service Improvement Project
• Initially began as a grid-based verification project using BOIVerify
• Morphed into learning how we can evolve our services to focus more effort on high-impact events
• The project got us thinking: “What is a good forecast for a small area?”

5. Observations
• Grid-based verification requires an objective analysis based on ASOS & non-ASOS observations
• Lots of known problems with surface & analysis data
• Ob = Value ± Uncertainty

6. Observational Errors
• Instrument errors
• Gross errors
• Siting errors
• Errors of “representativeness”
(Photo: J. Horel)

7. Errors of “representativeness”
• The observation is accurate: it reflects both synoptic & microscale conditions
• But the microscale phenomenon it captures is not resolvable by the analysis or model
• Example: a cold pool in a narrow valley
  • The observation on the valley floor may be correct
  • But it is not captured by the analysis system

8. Representativeness Error Example: Temperature (°C)
[Map (www.topozone.com): observed temperatures of +9°C and -1°C in the adjacent Tooele Valley and Rush Valley]

9. Variability in Observations
• Examples from the WR/SSD RTMA evaluation
• Comparing analysis solutions along a terrain profile near SLC, UT
• ~70 mesonet obs in a 60 x 60 km area
[Figure: terrain profile from the Great Salt Lake to the Wasatch Mountains, ~60 km across]

10. Large Spread in Observations
• How do we analyze this?
[Figure: observed temperatures show a >11°C spread between 1400-1700 m]

11. Objective Analysis 101
• Analysis Value = Background Value + Observation Corrections (see the sketch after this slide)
• Analysis errors come from:
  • Errors in the background field
  • Observational errors
• A “good” analysis takes into account the uncertainty in the obs & background
  • It is a “best fit” to the obs
  • It won’t always match the obs
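A sketch of the standard form behind the first bullet above (generic statistical interpolation; the slide does not spell out an equation): the analysis x_a is the background x_b plus corrections toward the observations y, weighted by the background-error covariance B and the observation-error covariance R (which includes instrument and representativeness error), with H mapping the gridded state to the observation locations.

    \[
      x_a = x_b + K\,(y - H x_b), \qquad
      K = B H^{\mathsf T} \bigl(H B H^{\mathsf T} + R\bigr)^{-1}
    \]

Because the weight K shrinks toward zero where R is large relative to B, the analysis is a best fit that deliberately does not match uncertain obs exactly, which is the point of the last two sub-bullets.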

12. Forecast Verification
• Forecasters are comfortable with:
  • Verification against ASOS obs
  • Assessing forecast skill vs. MOS
• But is judging a forecast against a few points, without any regard for observational and representativeness errors, really the scientific way to verify forecasts?
• Is there a better way? Can we define a “good enough” forecast?

13. Proposal
• Evaluate grid-based forecasts vs. RTMA
• Use the RTMA to scientifically assign uncertainty
• Reward forecasts that are within the bounds of analysis uncertainty

14. RTMA Uncertainty Estimates
• RTMA/AOR provides a golden opportunity to revamp the verification program
• Analysis uncertainty varies by location
• Techniques under development at EMC to assign analysis uncertainty to the RTMA:
  • Backing out an estimate of the analysis error from the inverse of the Hessian of the analysis cost function (see the sketch after this slide)
  • Cross validation (expensive)
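A minimal sketch of the Hessian idea, assuming a generic variational cost function of the kind the RTMA minimizes (this form is not given in the talk):

    \[
      J(x) = \tfrac{1}{2}(x - x_b)^{\mathsf T} B^{-1} (x - x_b)
           + \tfrac{1}{2}(H x - y)^{\mathsf T} R^{-1} (H x - y)
    \]
    \[
      A \approx \bigl(\nabla^2 J\bigr)^{-1} = \bigl(B^{-1} + H^{\mathsf T} R^{-1} H\bigr)^{-1},
      \qquad \sigma_a(i) = \sqrt{A_{ii}}
    \]

The square roots of the diagonal of A give a gridpoint-by-gridpoint uncertainty estimate. Cross validation instead withholds observations and measures how well the analysis reproduces them, which is why it is labeled expensive.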

15. Example
• Verify forecasts based on the amount of uncertainty that exists in the analysis
• Example (a code sketch of this check follows the slide):
  • Forecast = 64°F
  • Analysis value = 66°F
  • Analysis uncertainty = ±3°F
  • No penalty for forecasts between 63-69°F (the forecast fell in the “good enough” range)
• This is a “distributions-oriented” approach…
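A minimal Python sketch of that check, using the numbers from the slide (the function name and structure are illustrative, not from the talk):

    def good_enough(forecast, analysis, uncertainty):
        """True when the forecast falls within the analysis value +/- its uncertainty."""
        return abs(forecast - analysis) <= uncertainty

    # Numbers from the slide: 64 F forecast, 66 F analysis, +/- 3 F uncertainty.
    print(good_enough(64.0, 66.0, 3.0))   # True  -> no penalty (63-69 F band)
    print(good_enough(61.0, 66.0, 3.0))   # False -> outside the band, penalized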

16. “Distributions-oriented” forecast verification
• Murphy and Winkler (1987) – the original framework paper
• Brooks and Doswell (1996) – reduced the dimensionality problem by using wider bins

17. Problem with the “distributions” approach
• The Brooks and Doswell (1996) example used 5°F bins
• Bins are set up as -5 to 0°F, 0 to 5°F, 5 to 10°F, etc.
• Forecast = 4.5°F (a small demonstration follows this slide)
  • Verification = 0.5°F → counted as a good forecast (same bin, despite a 4°F error)
  • Verification = 5.5°F → counted as a bad forecast (different bin, despite only a 1°F error)
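A small sketch of why fixed bins behave this way, and how a floating window centered on the verifying value avoids it, using the slide’s numbers (the helper functions and the 2.5-degree half-width are illustrative, not taken from the talk or the cited papers):

    import math

    def same_fixed_bin(forecast, verification, width=5.0):
        """True when both values fall in the same fixed-width bin aligned at zero (e.g. 0-5, 5-10)."""
        return math.floor(forecast / width) == math.floor(verification / width)

    def within_floating_bin(forecast, verification, half_width=2.5):
        """True when the forecast lies inside a window centered on the verifying value."""
        return abs(forecast - verification) <= half_width

    # Forecast of 4.5 from the slide:
    print(same_fixed_bin(4.5, 0.5), within_floating_bin(4.5, 0.5))   # True, False  (4.0 error)
    print(same_fixed_bin(4.5, 5.5), within_floating_bin(4.5, 5.5))   # False, True  (1.0 error)

The fixed bins reward the larger error and penalize the smaller one purely because of where a bin edge happens to fall; the floating window, the approach taken in Myrick and Horel (2006) on the next slide, depends only on the size of the error.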

18. Myrick and Horel (2006)
• Verified NDFD grid-based forecasts using floating bins whose width was based on the observational uncertainty (~2.5°C)

19. Forecast Example: Temperature (°F)
[Grid-box illustration comparing Forecast, RTMA, and RTMA Uncertainty values over a populated valley and the neighboring mountains]
• Green = forecasts are “good enough”
• Red = abs(RTMA - Forecast) > Uncertainty
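A gridded version of the same green/red test is a one-line mask in NumPy; a sketch with made-up arrays standing in for the slide’s grid boxes (the values below are illustrative, not the slide’s):

    import numpy as np

    # Illustrative 2 x 3 grids of temperatures (deg F); placeholder values only.
    forecast    = np.array([[54.0, 56.0, 58.0],
                            [60.0, 62.0, 60.0]])
    rtma        = np.array([[58.0, 57.0, 58.0],
                            [58.0, 61.0, 66.0]])
    uncertainty = np.array([[ 3.0,  2.0,  2.0],
                            [ 3.0,  4.0,  5.0]])

    # "Green" boxes: forecast within RTMA +/- uncertainty; everything else is "red".
    good = np.abs(rtma - forecast) <= uncertainty
    print(good)
    print(f"{good.mean():.0%} of grid boxes are good enough")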

20. Summary
• Challenge: how do we define a “good enough” forecast?
• Proposal:
  • Verify against RTMA ± uncertainty
  • Uncertainty based on observational, representativeness, & analysis errors
  • Give the forecaster credit for forecast areas that are within the uncertainty
• Goal: provide better feedback on which forecast areas are “good enough” and which areas need more attention

21. Special Thanks!
• Tim Barker (BOI WFO)
• Brad Colman (SEW WFO)
• Kirby Cook (SEW WFO)
• Andy Edman (WR/SSD)
• John Horel (Univ. Utah)
• Chad Kahler (WR/SSD)
• Mark Mollner (WR/SSD)
• Aaron Sutula (WR/SSD)
• Ken Pomeroy (WR/SSD)
• Manuel Pondeca (NCEP/EMC)
• Kevin Werner (WR/SSD)

22. References
Brooks, H. E., and C. A. Doswell, 1996: A comparison of measures-oriented and distributions-oriented approaches to forecast verification. Wea. Forecasting, 11, 288–303.
Murphy, A. H., and R. L. Winkler, 1987: A general framework for forecast verification. Mon. Wea. Rev., 115, 1330–1338.
Myrick, D. T., and J. D. Horel, 2006: Verification of surface temperature forecasts from the National Digital Forecast Database over the Western United States. Wea. Forecasting, 21, 869–892.
Representativeness errors – Western Region training module: http://ww2.wrh.noaa.gov/ssd/digital_services/training/Rep_Error_basics_final
Western Region Service Evolution Project internal page: http://ww2.wrh.noaa.gov/ssd/digital_services/
