

  1. Real-time Verification of Operational Precipitation Forecasts using Hourly Gauge Data
     Andrew Loughe, Judy Henderson, Jennifer Mahoney, Edward Tollerud
     Real-time Verification System (RTVS) of NOAA / FSL, Boulder, Colorado, USA

  2. Outline
     • Some approaches to objective verification
     • How we perform automated precipitation verification
     • What we mean by "real-time"
     • Forecasts + obs --> results disseminated over the web (the steps involved)
     • QC, model comparisons, statistical displays
     • Future direction

  3. If you don't have objective data, you are just another person with an opinion.

  4. Our Approach?
     • Basically, we're gross!
     • No really, we are...
     • We process 4,500 gauge measurements each hour of every day. On average we retain 2,800 "good" reports. That's roughly 67,000 observations per day, 2 million per month, and over 6 million per season.

  5. The Real-time Verification System
     • An independent, real-time, automated data ingest and management system
     • Gauge observations received each hour of every day (~4,500)
     • A gross error check is performed on the observations
     • Model forecasts are interpolated to the observation points
     • Results stored in 2x2 contingency tables of forecast / observation pairs (YY, YN, NY, NN)
     • Graphics, skill scores, and contingency information disseminated over the World Wide Web

  6. Alternative Approaches (Should Be Objective)
     • Grid-to-grid verification
     • We're game... but not yet!
     • More fair to the modelers
     • Less fair to the end-users of the forecast?
     • More representative of the areal coverage of precipitation
     • Can do pattern matching and partitioning of the error (Ebert et al.) or studies of representativeness error (Foufoula et al.)

  7. What about Case Studies? Do you fish with a pole or do you fish with a net?
     • We fish with a net
     • Case studies are insufficient for evaluating national-scale forecast systems
     • Subjective analyses often focus on where forecasts work well, not on where they work poorly
     • There is a need to assess variability on many time and space scales (from daily to seasonal)
     • Timely and objective information is needed for decision making

  8. Realtime or Near-Realtime?
     • Real-time processing... monthly and seasonal dissemination of results (for now)
     • Gauge data stored in hourly bins (see the binning sketch below)
     • Model data interpolated once the observations catch up (models are initialized as late as 18Z, and 24-h forecasts are then made)
     • Data collected over numerous accumulation periods
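A minimal sketch of this hourly-bin bookkeeping, assuming one precipitation total per station per hour; the function and station names are hypothetical, not RTVS code:

    # Hourly-bin bookkeeping sketch (hypothetical names, not RTVS code).
    from collections import defaultdict
    from datetime import datetime, timedelta

    hourly_bins = defaultdict(dict)   # station_id -> {valid hour: precip (inches)}

    def add_report(station_id, valid_hour, precip_inches):
        """File one gauge report into its hourly bin."""
        hourly_bins[station_id][valid_hour] = precip_inches

    def accumulate(station_id, end_hour, period_hours):
        """Sum the period_hours bins ending at end_hour (3, 6, 12, or 24).
        Return None if any bin is missing, so a partial total is never
        mistaken for a real accumulation."""
        bins = hourly_bins[station_id]
        hours = [end_hour - timedelta(hours=k) for k in range(period_hours)]
        if any(h not in bins for h in hours):
            return None
        return sum(bins[h] for h in hours)

    # e.g. the 3-h total ending 18Z (station name invented)
    for k, amount in enumerate([0.50, 0.25, 0.25]):
        add_report("GAUGE_0001", datetime(1999, 6, 15, 18) - timedelta(hours=k), amount)
    print(accumulate("GAUGE_0001", datetime(1999, 6, 15, 18), 3))   # 1.0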

  9. Go with the flow...
     • Obtain gauge data and collect it into hourly bins
     • Match data with the list of "good" stations (QC'd list)
     • Interpolate model data to the "good" observation points
     • Accumulate precipitation over 3, 6, 12, and 24 hours
     • Compute contingency pairs (YY, YN, NY, NN), as in the sketch below
     • Process these contingency data to create plots of ESS and bias for Eta and RUC2
     • Make these displays and the associated statistical information available through the web
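The contingency step reduces to a yes/no threshold test on each forecast/observation pair; a compressed sketch (hypothetical names, not the operational code):

    # Sketch of the contingency-pair step (hypothetical names, not RTVS code).
    # Each forecast/observation pair is classified against a threshold into
    # one of the four cells YY, YN, NY, NN of a 2x2 contingency table.

    def classify(fcst, obs, threshold):
        """Return the 2x2 cell for one forecast/observation pair."""
        f_yes = fcst >= threshold
        o_yes = obs >= threshold
        if f_yes and o_yes:
            return "YY"   # hit
        if f_yes and not o_yes:
            return "YN"   # false alarm
        if not f_yes and o_yes:
            return "NY"   # miss
        return "NN"       # correct negative

    def tally(pairs, threshold):
        """Count the four cells over all forecast/observation pairs."""
        table = {"YY": 0, "YN": 0, "NY": 0, "NN": 0}
        for fcst, obs in pairs:
            table[classify(fcst, obs, threshold)] += 1
        return table

    # Example: 24-h amounts (inches) at a 0.5-inch threshold
    pairs = [(0.7, 0.9), (0.6, 0.1), (0.0, 0.8), (0.0, 0.0)]
    print(tally(pairs, 0.5))   # {'YY': 1, 'YN': 1, 'NY': 1, 'NN': 1}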

  10. A Point-Specific Approach (Eta at 40 km)

  11. Gauge Data Checked for Accuracy
     • Hourly gauge data are checked for accuracy against radar, 24-h totals, and nearest-neighbor reports (a rough sketch of such a check follows)
     • Further data are included through in-house QC efforts
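As a rough illustration of this kind of gross error check; the comparison sources are those named on the slide, but the tolerances here are invented, not the operational values:

    # Rough sketch of a gauge gross-error check (hypothetical tolerances,
    # not the operational RTVS values). A report is kept only if it is
    # physically plausible and broadly consistent with independent estimates.

    def passes_gross_check(gauge_1h, radar_1h=None, neighbor_1h=None,
                           max_plausible=3.0, max_disagreement=1.0):
        """Return True if an hourly gauge report (inches) survives the checks."""
        if gauge_1h < 0.0 or gauge_1h > max_plausible:
            return False                  # physically implausible amount
        if radar_1h is not None and abs(gauge_1h - radar_1h) > max_disagreement:
            return False                  # disagrees badly with the radar estimate
        if neighbor_1h is not None and abs(gauge_1h - neighbor_1h) > max_disagreement:
            return False                  # disagrees badly with the nearest neighbor
        return True

    # e.g. keep a 0.25-inch report that radar puts near 0.20 inches
    assert passes_gross_check(0.25, radar_1h=0.20)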

  12. Forecast / Observation Comparisons
     • Comparisons made at numerous thresholds, from 0.1 to 5.0 inches
     • Comparisons made over 3, 6, 12, and 24-h accumulation periods

  13. 2x2 Contingency Tables
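The table on this slide did not survive extraction; the standard 2x2 layout at a threshold t is:

                        observed >= t    observed < t
        forecast >= t        YY              YN
        forecast <  t        NY              NN

A minimal sketch of the bias and ESS computations from these counts, assuming ESS here denotes the equitable skill score (the Gilbert / equitable threat score); the exact operational definitions are RTVS's own:

    # Bias and ESS from a 2x2 contingency table. Assumes ESS is the
    # equitable (Gilbert) skill score; the operational definition may differ.

    def bias(yy, yn, ny, nn):
        """Frequency bias: forecast 'yes' count over observed 'yes' count.
        1 is unbiased; >1 over-forecasts the event, <1 under-forecasts it."""
        return (yy + yn) / (yy + ny)

    def ess(yy, yn, ny, nn):
        """Equitable skill score: hits in excess of those expected by chance."""
        n = yy + yn + ny + nn
        chance = (yy + yn) * (yy + ny) / n   # hits expected from random forecasts
        return (yy - chance) / (yy + yn + ny - chance)

    print(bias(30, 20, 10, 40), ess(30, 20, 10, 40))   # 1.25 0.25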

  14. Results Available over the Web: www-ad.fsl.noaa.gov/afra/rtvs/precip
     • Specify parameters... obtain a graphical result
     • View contingency tables stored on disk

  15. The Future! Access and Displays via Database (Model Icing Forecasts)
     • Specify parameters
     • Display results (gnuplot) via database query (MySQL); a query sketch follows
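A minimal sketch of what such a query-then-plot step could look like; the schema, table, column, and connection details are all invented, not the RTVS database:

    # Hypothetical sketch of pulling verification counts from MySQL for
    # plotting. The table and column names are invented, not the RTVS schema.

    import mysql.connector

    cnx = mysql.connector.connect(user="rtvs", database="verification")
    cur = cnx.cursor()
    cur.execute(
        "SELECT threshold, yy, yn, ny, nn "
        "FROM contingency "
        "WHERE model = %s AND accum_hours = %s "
        "ORDER BY threshold",
        ("Eta", 24),
    )
    rows = cur.fetchall()
    cnx.close()

    # Write a simple (threshold, bias) file that gnuplot can plot directly,
    # e.g. with:  plot 'bias.dat' using 1:2 with linespoints
    with open("bias.dat", "w") as f:
        for threshold, yy, yn, ny, nn in rows:
            f.write(f"{threshold} {(yy + yn) / (yy + ny)}\n")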

  16. Are These Methods Sufficient?
     • Trade-off between dealing with the specifics and dealing with the general (rifle vs. shotgun)
     • The method is not discretized by region or event
     • The density of observations is not smooth
     • Although the method is straightforward, there is still a lack of understanding of what the skill scores represent
     • May tell you which forecast system is "better", but not why

  17. Future Plans
     • Add more models to this point-specific approach, and provide a measure of confidence
     • Perform verification using a gridded, analyzed precipitation field (Stage IV precipitation)
     • Verify the probabilistic forecasts of ensembles
     • Move verification data into the relational database and compute results on the fly
     • Relate verification results geographically
     • Access verification results as soon as the forecast period ends (timeliness)

  18. Cont'd...
     • Test and extend QC of the observations
     • Currently we are:
       • Assessing skill using East-only and West-only hourly station data
       • Assessing skill using the full RFC and the in-house QC methods
       • Assessing skill using no QC methods whatsoever
       • Comparing these four experimental results

  19. Problem: Not Reporting "Zero" Precipitation?

  20. The Effect on Precipitation Verification
