
Assessing added value of high resolution forecasts Emiel van der Plas Maurice Schmeits, Kees Kok


  1. Assessing added value of high resolution forecasts. Emiel van der Plas, Maurice Schmeits, Kees Kok. KNMI, The Netherlands

  2. Introduction. Question: do high resolution (convection-resolving) models perform better than the previous generation of models, for T2m, wind and precipitation? KNMI: Harmonie (2.5 km) > Hirlam (11, 22 km)? Harmonie > ECMWF (deterministic run, T1279)? Verification of high resolution NWP forecasts is challenging: precipitation is highly localised, and against radar/station data the double penalty applies. If there is extra skill, how can it be demonstrated objectively? In this talk: fuzzy methods and Model Output Statistics.

  3. Set-up. Harmonie (‘ECJAN’): 2.5 km, 300x300 domain, AROME physics, 3DVAR, ECMWF boundaries. A run with 800x800 points and Hirlam boundaries exists, but no sufficient archive is available… Hirlam (D11): 22 km, 136 x 226, 3DVAR, ECMWF boundaries. ECMWF operational (T1279): ±16 km, global, 3DVAR. Radar: Dutch precipitation radar composite, 1 km. • Period: 1st February 2012 - 31st May 2012 • All output resampled to the Harmonie grid (nearest neighbour)
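As a minimal sketch of the resampling step, a brute-force nearest-neighbour lookup in Python (the function name and toy coordinates are illustrative, not from the study, which regridded full model fields):

```python
# Illustrative sketch (not the actual pipeline): nearest-neighbour
# resampling of scattered source points onto destination points,
# as all model output was resampled to the Harmonie grid.
import numpy as np

def resample_nearest(src_pts, src_vals, dst_pts):
    """For each destination point, take the value of the closest
    source point (brute force; a KD-tree would scale better)."""
    # squared distances, shape (n_dst, n_src)
    d2 = ((dst_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(axis=-1)
    return src_vals[d2.argmin(axis=1)]

# toy example: two source points, two destination points
src_pts = np.array([[0.0, 0.0], [1.0, 0.0]])
src_vals = np.array([5.0, 7.0])
dst_pts = np.array([[0.1, 0.0], [0.9, 0.5]])
regridded = resample_nearest(src_pts, src_vals, dst_pts)
```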

  4. Example of Direct Model Output. E.g. frontal precipitation, 7th March 2012. [Panels: Radar, Harmonie (ECJAN), Hirlam (D11), ECMWF]

  5. Neo-classical verification: fuzzy methods • MET: suite of verification tools by NCAR (WRF) • Grid-based scores: with respect to gridded radar observations • Fractions Skill Score (Roberts, Lean 2008) • Hanssen-Kuiper discriminant, Gilbert Skill Score (ETS), … • Object-based scores (not in this paper) [Panels: GSS, 25x25, > 2 mm/3h; FSS, 3x3, > 1 mm/3h]
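The Fractions Skill Score compares the fraction of grid points exceeding a threshold within an n x n neighbourhood of forecast and observation, which is why it forgives small displacements that point-wise scores penalise twice. A self-contained sketch (illustrative code, not the MET implementation):

```python
# Illustrative Fractions Skill Score (Roberts & Lean 2008) on a toy
# field; not the MET implementation.
import numpy as np

def neighbourhood_fraction(exceed, n):
    """Fraction of points exceeding the threshold in each centred
    n x n window (zero padding at the edges), via a summed-area table."""
    pad = n // 2
    p = np.pad(exceed.astype(float), pad)
    s = np.pad(p.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    win = s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]
    return win / (n * n)

def fss(fcst, obs, threshold, n):
    """FSS = 1 - MSE(fractions) / reference MSE; 1 is perfect,
    and n = 1 reduces to point-wise matching (double penalty)."""
    pf = neighbourhood_fraction(fcst >= threshold, n)
    po = neighbourhood_fraction(obs >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

# toy fields: a single rain pixel, displaced by one grid point
obs = np.zeros((5, 5)); obs[2, 2] = 2.0
fcst = np.zeros((5, 5)); fcst[2, 3] = 2.0
fss_point = fss(fcst, obs, 1.0, 1)  # point-wise: no skill at all
fss_hood = fss(fcst, obs, 1.0, 3)   # 3x3 neighbourhood: credit for the near miss
```

The two scores at the end illustrate the double penalty: the same near-miss forecast scores zero point-wise but well above zero with a small neighbourhood.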

  6. MOS: what is relevant in DMO? How would a trained meteorologist look at direct model output?

  7. Predictors. How would a trained meteorologist look at direct model output?

  8. Predictors. How would a trained meteorologist look at direct model output?

  9. Predictors. How would a trained meteorologist look at direct model output?

  10. Model Output Statistics: predictive potential. Construct a set of predictors (per model, station, starting time and lead time); for now, use precipitation only. Use various ‘areas of influence’: 25, 50, 75, 100 km. Predictors: DMO, coverage, max(DMO) within the area, distance to forecasted precipitation, …, threshold. Apply extended logistic regression, with the threshold sqrt(q) as a predictor, giving a complete distribution function (Wilks, 2009). Forward stepwise selection and backward deletion using the R package stepPLR (Mee Young Park and Trevor Hastie, 2008). Verify the probabilities based on the coefficients of the selected predictors in terms of reliability diagrams and the Brier Skill Score.
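The key trick named above is that the threshold itself becomes a predictor, so one fitted equation covers all thresholds at once. A sketch of that idea (the coefficients below are made up for illustration, not fitted values from this work):

```python
# Sketch of the extended logistic regression idea (Wilks 2009): the
# precipitation threshold q enters as a predictor via sqrt(q), so one
# set of coefficients gives a complete distribution function.
import math

def elr_cdf(x, q, a=-1.0, b=1.5, c=-0.8):
    """P(precip <= q | predictor x); b > 0 makes the CDF increase
    with the threshold q, keeping probabilities coherent."""
    z = a + b * math.sqrt(q) + c * x
    return 1.0 / (1.0 + math.exp(-z))

def elr_exceedance(x, q, **kw):
    """P(precip > q | predictor x)."""
    return 1.0 - elr_cdf(x, q, **kw)
```

Because the same coefficients serve every threshold, exceedance probabilities never cross: a higher threshold always gets a lower probability, which separate per-threshold regressions cannot guarantee.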

  11. Results: example of poor skill. [Figures: Harmonie, ECMWF, D11; 00 UTC +003 h]

  12. Results: example of good skill. [Figures: Harmonie, ECMWF, D11; 00 UTC +012 h]

  13. Outlook • No conclusive results yet • Grid-based, “fuzzy” methods suggest reasonable skill for the high resolution NWP model (Harmonie) • MOS: a mixed bag • Frontal systems (FMAM) are well captured by hydrostatic models • To do: • Larger dataset • Separate training data and independent data • Convective season: more cases, higher thresholds • Include the Harmonie run on the large domain • …

  14. Extended Logistic Regression (ELR). Binary predictand yi (here: precip > q). Probability: logistic. Joint likelihood with L2 penalisation, minimised using the R package stepPLR (Mee Young Park and Trevor Hastie, 2008). Use threshold sqrt(q) as predictor: complete distribution function (Wilks, 2009). Few cases, many potential predictors: pool stations, at most 5 terms.
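The equations on this slide were lost in transcription; a standard reconstruction of penalised logistic regression in the form used by stepPLR (Park and Hastie, 2008), with the ELR extension of Wilks (2009):

```latex
% Logistic probability for binary predictand y_i (here: precip > q)
p_i = P(y_i = 1 \mid \mathbf{x}_i)
    = \frac{\exp(\beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_i)}
           {1 + \exp(\beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_i)}

% Joint (log-)likelihood over all cases
\ell(\beta_0, \boldsymbol{\beta})
    = \sum_i \bigl[ y_i \log p_i + (1 - y_i)\log(1 - p_i) \bigr]

% L2 penalisation: minimise
-\ell(\beta_0, \boldsymbol{\beta})
    + \frac{\lambda}{2}\,\lVert \boldsymbol{\beta} \rVert_2^2

% ELR: include sqrt(q) among the predictors x_i, so one fit
% yields the complete distribution function over thresholds q
```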

  15. Period: 1st February 2012 - 31st May 2012; the archive available for Harmonie was the limiting factor. Mostly frontal precipitation. [Panels: ECJAN, D11, ECMWF, Radar]

  16. Period: base rate and categorical scores (HSS, HK, FBIAS). [Figure: HK over the period]

  17. Verification: classical scores, Fraction Skill Score. Classical or categorical verification, e.g. the Hanssen-Kuiper discriminant (aka True Skill Statistic, Peirce Skill Score): HK = (a d - b c) / ((a + c)(b + d)), based on the contingency table

                  Observed
                   yes   no
      Forecast yes  a    b
               no   c    d

  Fraction Skill Score (Roberts & Lean, 2008). Straightforward interpretation, but: double penalty.
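The categorical scores used here all derive from the same 2x2 table; a short illustrative helper (the function name is ours, the formulas are the standard ones):

```python
# Categorical scores from a 2x2 contingency table:
# a = hits, b = false alarms, c = misses, d = correct negatives.
def categorical_scores(a, b, c, d):
    n = a + b + c + d
    hk = (a * d - b * c) / ((a + c) * (b + d))  # Hanssen-Kuiper / Peirce
    a_rand = (a + b) * (a + c) / n              # hits expected by chance
    gss = (a - a_rand) / (a + b + c - a_rand)   # Gilbert Skill Score (ETS)
    fbias = (a + b) / (a + c)                   # frequency bias
    return {"HK": hk, "GSS": gss, "FBIAS": fbias}

perfect = categorical_scores(10, 0, 0, 90)    # perfect forecast
no_skill = categorical_scores(1, 9, 9, 81)    # forecast independent of obs
```

A perfect forecast scores 1 on all three, while a forecast statistically independent of the observations gets zero HK and GSS, which is what makes them skill scores rather than accuracy measures.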

  18. Verification: MODE (object based), wavelets. MET provides access to MODE analysis: “Method for Object-based Diagnostic Evaluation”. Forecast and observation fields are convolved, thresholded, … [Panels: FC, OBS]
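A toy version of those first MODE steps (convolve, threshold, identify objects); the real MET/MODE implementation is far richer, and the names and example field here are ours:

```python
# Simplified object identification in the spirit of MODE:
# smooth the field, threshold it, then label connected regions.
import numpy as np
from collections import deque

def box_smooth(field, n=3):
    """Convolution with an n x n box kernel, zero padding."""
    pad = n // 2
    p = np.pad(field.astype(float), pad)
    out = np.zeros_like(field, dtype=float)
    for di in range(n):
        for dj in range(n):
            out += p[di:di + field.shape[0], dj:dj + field.shape[1]]
    return out / (n * n)

def label_objects(mask):
    """Label 4-connected components of a boolean mask (flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

# two separated precipitation cells become two objects
field = np.zeros((8, 8))
field[1, 1] = field[6, 6] = 9.0
mask = box_smooth(field, 3) >= 0.5
labels, n_objects = label_objects(mask)
```

Once labelled, each object can be reduced to the attributes slide 19 lists (centre of mass, area, angle, …) and matched between forecast and observation.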

  19. Verification: MODE (object based), wavelets. … merged, matched and compared. Object attributes: centre of mass, area, angle, convex hull, … [Panels: OBS, FC]
