
Verification techniques for high resolution NWP precipitation forecasts



Presentation Transcript


  1. Verification techniques for high resolution NWP precipitation forecasts. Emiel van der Plas (plas@knmi.nl), Kees Kok, Maurice Schmeits

  2. Introduction. NWP has come a long way… It was [figure]; then it became Hirlam [figure]; now it is Harmonie; it should become GALES (or so). It looks better… But how is it better? Does it perform better? That remains to be seen…

  3. Representation: the “double penalty”. When forecasting localised phenomena, a slightly displaced feature counts as a false alarm plus a miss: the double penalty. This shows up when we take point-by-point errors (ME/RMSE). [Figures: station (gauge) data; forecast vs radar data.] A worked numerical illustration follows below.
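
To make the double penalty concrete, here is a minimal numeric sketch in R (illustrative values of my own, not from the talk): a forecast that places a real shower one grid box off scores a worse point-wise RMSE than forecasting no rain at all.

    # Illustrative 1-D "fields" (hypothetical values).
    obs      <- c(0, 0, 10, 0, 0)   # observed shower in box 3
    fc_shift <- c(0, 10, 0, 0, 0)   # forecast shower displaced to box 2
    fc_none  <- c(0, 0, 0, 0, 0)    # forecast of no rain anywhere

    rmse <- function(f, o) sqrt(mean((f - o)^2))
    rmse(fc_shift, obs)  # 6.32: penalised twice (false alarm + miss)
    rmse(fc_none, obs)   # 4.47: forecasting "nothing" scores better point-wise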

  4. This talk. HARP (Hirlam Aladin R-based verification Packages): tools for spatial and ensemble verification, based on R (FSS, SAL, …), relying on e.g. the SpatialVx package (NCAR). A generalized MOS approach. Comparison of high vs low resolution: Hirlam (11 km, hydrostatic), Harmonie (2.5 km, non-hydrostatic, with & without Mode-S), ECMWF (T1279, deterministic). Lead times: +003, +006, +009, +012. Accumulated precipitation verified against (Dutch) radar and synop.

  5. Neo-classical: neighborhood methods, FSS. Options: FSS, ISS, SAL, … The Fractions Skill Score (fuzzy verification; Roberts & Lean, 2008) has a straightforward interpretation and ‘resolves’ the double penalty, but it ‘smoothes’ away resolution that may contain information! Taking the neighbourhood as (V_storm · Δt) amounts to upscaling. [Figure: base rate and FSS, forecast vs observation.] A sketch of the computation follows below.
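
A minimal base-R sketch of the FSS computation (my own illustration; packages such as SpatialVx implement this properly): threshold both fields, convert them to neighbourhood fractions with a square moving window of (odd) width n, then compare the fraction fields.

    # Fraction of wet boxes in an n-by-n window around each grid point.
    neigh_frac <- function(wet, n) {
      half <- (n - 1) %/% 2
      out  <- matrix(NA_real_, nrow(wet), ncol(wet))
      for (i in seq_len(nrow(wet))) {
        for (j in seq_len(ncol(wet))) {
          ri <- max(1, i - half):min(nrow(wet), i + half)
          rj <- max(1, j - half):min(ncol(wet), j + half)
          out[i, j] <- mean(wet[ri, rj])
        }
      }
      out
    }

    # FSS = 1 - MSE(fraction fields) / MSE of a no-overlap reference.
    fss <- function(fc, obs, thresh, n) {
      pf <- neigh_frac(fc  >= thresh, n)
      po <- neigh_frac(obs >= thresh, n)
      1 - sum((pf - po)^2) / (sum(pf^2) + sum(po^2))
    }

With n = 1 this reduces to a point-wise comparison; increasing n lets the score recover from the double penalty for displaced features.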

  6. FSS results: differences are sometimes subtle. [Figure: FSS for 1×1 and 3×3 neighbourhoods.]

  7. FSS: more results. Do higher resolutions call for higher thresholds? And this is still direct model output (DMO)!

  8. How would a trained meteorologist look at direct model output? Model Output Statistics: learn for each model, location, … separately!

  9. Model Output Statistics. Construct a set of predictors (per model, station, starting time and lead time); for now, use precipitation only. Use various ‘areas of influence’: 25, 50, 75, 100 km; predictors include DMO, coverage, max(DMO) within the area, distance to forecasted precipitation, … Apply logistic regression with forward stepwise selection and backward deletion to obtain the probability of threshold exceedance. Verify probabilities based on DMO and on the coefficients of the selected predictors. Training data: days 1–20; ‘independent’ data: days 21–28/31. A sketch of the regression step follows below.
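
A minimal sketch of the regression step (hypothetical data frame and column names of my own; the talk's actual implementation is part of the HARP/MOS setup):

    # Hypothetical training data: one row per (station, start time, lead time),
    # with neighbourhood predictors and the observed precipitation.
    train$exceed <- as.integer(train$obs_precip > 0.3)   # e.g. q = 0.3 mm

    # Logistic regression: probability of threshold exceedance.
    fit <- glm(exceed ~ dmo + coverage_100 + max_dmo_100 + dist_precip_100,
               family = binomial, data = train)

    # Probabilistic forecasts on the independent days (21-28/31).
    p_exceed <- predict(fit, newdata = indep, type = "response")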

  10. Model (predictor) selection, based on the AIC (Akaike Information Criterion): at each step, take the predictor that improves (i.e. lowers) the AIC most on the training set (days 1–20), then test on the independent set (days 21–28/31). More predictors != more skill. [Figure: skill for selected predictors such as sqrt(max)_100, sqrt(tot_100), distext_100, exp2int_100.] A base-R sketch follows below.
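
Stepwise selection by AIC is available in base R via step(); a sketch with the same hypothetical names as above:

    # Start from an intercept-only model and let step() add or drop terms,
    # at each step choosing the move that lowers the AIC the most.
    null_fit <- glm(exceed ~ 1, family = binomial, data = train)
    sel_fit  <- step(null_fit,
                     scope = exceed ~ dmo + coverage_100 + max_dmo_100 +
                                      dist_precip_100,
                     direction = "both")   # forward selection + backward deletion
    summary(sel_fit)   # coefficients of the selected predictors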

  11. Model comparison (April – October 2012): Hirlam, Harmonie (based on Hirlam), ECMWF. [Figure: scores at 12UTC+003, 12UTC+006 and 12UTC+009.]

  12. Discussion, to do. MOS method: stratification per station, season, …; more data necessary, reforecasting under way; for the representation error, take a (small) radar area; use ELR and conditional probabilities for higher thresholds; extend to wind, fog/visibility, MSG/cloud products, etc. FSS: use OPERA data.

  13. Conclusion/Discussion. Comparison between NWP models of different resolution is, well, fuzzy: realism != score. The Fractions Skill Score yields numbers, but it is sometimes hard to draw conclusions from them. The MOS method is resolution/model independent, takes into account what we know, and (potentially) doubles as predictive guidance. Thank you for your attention!

  14. Extended Logistic Regression (ELR). Binary predictand $y_i$ (here: precip $> q$). Probability (logistic): $p_i = \exp(\boldsymbol\beta^T \mathbf{x}_i) / [1 + \exp(\boldsymbol\beta^T \mathbf{x}_i)]$. Joint likelihood: $L(\boldsymbol\beta) = \prod_i p_i^{y_i} (1 - p_i)^{1 - y_i}$. L2 penalisation (using R: stepPLR by Mee Young Park and Trevor Hastie, 2008): minimise $-\log L(\boldsymbol\beta) + \tfrac{\lambda}{2} \lVert \boldsymbol\beta \rVert^2$. Use the threshold ($\sqrt{q}$) as a predictor to obtain a complete distribution function (Wilks, 2009). Few cases, many potential predictors: pool stations, at most 5 terms. A sketch follows below.
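
A minimal sketch of the ELR idea (my own illustration with an unpenalised glm; the talk used stepPLR's L2-penalised fit): pool the training rows across thresholds, with sqrt(q) entering as an extra predictor, so that one fit yields exceedance probabilities for any threshold.

    # Hypothetical setup: replicate each training case for every threshold q,
    # with sqrt(q) as an ordinary predictor (Wilks, 2009).
    thresholds <- c(0.3, 1, 3, 10)   # mm; illustrative values
    pooled <- do.call(rbind, lapply(thresholds, function(q) {
      transform(train, sqrt_q = sqrt(q),
                       exceed = as.integer(obs_precip > q))
    }))

    # One logistic fit, valid for all thresholds at once.
    elr_fit <- glm(exceed ~ dmo + coverage_100 + sqrt_q,
                   family = binomial, data = pooled)

    # P(precip > 5 mm) on independent data: plug in sqrt_q = sqrt(5).
    p5 <- predict(elr_fit, newdata = transform(indep, sqrt_q = sqrt(5)),
                  type = "response")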
