
Pierre Eckert, MeteoSwiss, Geneva, WG4 coordinator



Presentation Transcript


  1. WG4 Activities. Priority project «Advanced interpretation and verification of very high resolution models». Pierre Eckert, MeteoSwiss, Geneva, WG4 coordinator

  2. Gust diagnostics. Jan-Peter Schulz, Deutscher Wetterdienst

  3. Diagnosing turbulent gusts. In the COSMO model the maximum gusts at 10 m above the ground are estimated from the absolute speed of the near-surface mean wind V_m and its standard deviation σ, following Panofsky and Dutton (1984):

  V_gust = |V_m| + α·σ, with σ = 2.4·u_* and u_* = √C_D·|V_m|

  α = 3: tuning parameter; u_*: friction velocity; C_D: drag coefficient for momentum.
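
  As an illustration, a minimal Python sketch of this diagnostic. It assumes the standard Panofsky and Dutton (1984) surface-layer relation σ ≈ 2.4 u_*; the drag-coefficient value and the function name diagnose_gust are hypothetical, not the actual COSMO routine.

```python
import numpy as np

ALPHA = 3.0  # tuning parameter (slide: alpha = 3)

def diagnose_gust(v_mean, c_d, alpha=ALPHA):
    """Estimate the 10 m gust from the mean wind speed v_mean [m/s] and the
    (dimensionless) drag coefficient for momentum c_d, following
    Panofsky and Dutton (1984): sigma ~ 2.4 u*, u* = sqrt(c_d) * |v|."""
    u_star = np.sqrt(c_d) * np.abs(v_mean)   # friction velocity
    sigma = 2.4 * u_star                     # std. dev. of near-surface wind
    return np.abs(v_mean) + alpha * sigma    # gust = mean wind + alpha * sigma

# example: 12 m/s mean wind, drag coefficient chosen only for illustration
print(diagnose_gust(12.0, 2.0e-3))  # ~15.9 m/s
```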

  4. Verification, period: 10-25 Jan. 2007. [Plots vs. mean wind [m/s]: mean observed gust [m/s] and ratio of mean gust to mean wind (observed); x = old formulation, + = new formulation]

  5. Gust diagnostics: recommendation. WG4 recommends that the formulation of wind gusts in the COSMO reference version be adapted so that the gusts are reduced. This could be affected by the choice of the vertical discretisation. → Poster

  6. Thunderstorm Prediction with Boosting: Verification and Implementation of a new Base Classifier. André Walser (MeteoSwiss), Martin Kohli (ETH Zürich, semester thesis)

  7. Output of the learning process: M base classifiers, combined into the final classifier. The base classifier is a threshold classifier.

  8. AdaBoost algorithm. Input: weighted learn samples; number of base classifiers M. Iteration: (1) determine the base classifier G; (2) calculate the error and the weights w; (3) adapt the weights of falsely classified samples. Classifier: weighted combination of the M base classifiers.
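
  For concreteness, a minimal AdaBoost sketch in Python using decision stumps as the threshold base classifiers. This is the generic textbook formulation (classifier weight α_m = ½ ln((1 − err)/err)), not the implementation described in the talk; all function names are illustrative.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best threshold classifier (decision stump) under sample weights w:
    search every feature j, threshold t and orientation s."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = np.sum(w[pred != y])          # weighted error
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def adaboost(X, y, M):
    """AdaBoost with threshold base classifiers; labels y in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))          # weighted learn samples
    models = []
    for _ in range(M):
        j, t, s, err = fit_stump(X, y, w)      # 1. determine base classifier G
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # 2. error -> classifier weight
        pred = np.where(X[:, j] > t, s, -s)
        w *= np.exp(-alpha * y * pred)         # 3. up-weight falsely classified samples
        w /= w.sum()
        models.append((alpha, j, t, s))
    return models

def predict(models, X):
    """Classifier: sign of the alpha-weighted vote of the M base classifiers."""
    f = sum(a * np.where(X[:, j] > t, s, -s) for a, j, t, s in models)
    return np.sign(f)
```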

  9. C_TSTORM maps. [Panels at 17 UTC, 18 UTC, 19 UTC]

  10. [Verification diagram for July 2006; ~7% of cases are events; random forecast shown for reference]

  11. The COSMO-LEPS system: getting close to the 5-year milestone. Andrea Montani, Chiara Marsigli and Tiziana Paccagnella, ARPA-SIM, Hydrometeorological Service of Emilia-Romagna, Italy. IX General COSMO Meeting, Athens, 18-21 September 2007

  12. The new COSMO-LEPS suite @ ECMWF (since February 2006)
  • Cluster analysis and RM identification on a superensemble of the older (00 UTC) and younger (12 UTC) ECMWF EPS runs; clustering over the European area (complete linkage) for the period d+3 to d+5, using 4 variables (Z, U, V, Q) at 3 levels (500, 700, 850 hPa).
  • 16 Representative Members drive the 16 COSMO-model integrations (weighted according to the cluster populations), employing either the Tiedtke or the Kain-Fritsch convection scheme (randomly chosen).
  • Suite running as a “time-critical application” managed by ARPA-SIM.
  • Δx ~ 10 km; 40 ML; COSMO-LM 3.20 since Nov 06; fc length: 132 h.
  • Computer time (4.3 million BU for 2007) provided by the COSMO partners that are ECMWF member states.
  [Maps: COSMO-LEPS integration domain and COSMO-LEPS clustering area]
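
  A sketch of the clustering step under stated assumptions: scipy's complete-linkage clustering on flattened EPS states, with the representative member of each cluster taken here as the member closest to the cluster mean. The actual COSMO-LEPS RM selection criterion may differ; select_representative_members and the toy dimensions are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_representative_members(eps_states, n_clusters=16):
    """Cluster EPS member states (flattened Z, U, V, Q fields) with
    complete linkage; return one representative member (RM) per cluster
    and the cluster population that weights the corresponding run."""
    Z = linkage(eps_states, method="complete")  # complete-linkage tree
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    rms, weights = [], []
    for c in range(1, n_clusters + 1):
        members = np.where(labels == c)[0]
        centre = eps_states[members].mean(axis=0)
        dists = np.linalg.norm(eps_states[members] - centre, axis=1)
        rms.append(int(members[np.argmin(dists)]))  # member nearest the mean
        weights.append(len(members))                # cluster population
    return rms, weights

# hypothetical: 102 EPS members (two runs of 51), 500 state variables
states = np.random.randn(102, 500)
print(select_representative_members(states))
```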

  13. Dissemination
  • probabilistic products
  • deterministic products (individual COSMO-LEPS runs)
  • derived probability products (EM, ES)
  • meteograms over station points
  Products are delivered at about 1 UTC to the COSMO weather services, to Hungary (case studies) and to the MAP D-PHASE and COPS communities (field campaigns).

  14. Time series of Brier Skill Score. (Jun 04: 5 → 10 members; Feb 06: 10 → 16 members, 32 ML → 40 ML.)
  • BSS is written as 1 - BS/BS_ref; the sample climate is the reference system. Forecast systems are useful if BSS > 0.
  • BS measures the mean squared difference between forecast and observation in probability space; it is equivalent to the MSE for a deterministic forecast.
  • BSS at fc step 30-42 h: an improvement of performance is detectable for all thresholds along the years; still problems with high thresholds, but a good trend in 2007.
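
  The two definitions on this slide translate directly into code; a short sketch with illustrative numbers:

```python
import numpy as np

def brier_score(p, o):
    """BS: mean squared difference between forecast probability p and
    binary observation o (1 = event occurred), i.e. MSE in probability space."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o):
    """BSS = 1 - BS/BS_ref, with the sample climatology as the reference
    system; the forecast is useful relative to climatology if BSS > 0."""
    o = np.asarray(o, float)
    clim = o.mean()                                   # sample climate frequency
    bs_ref = brier_score(np.full_like(o, clim), o)    # reference BS
    return 1.0 - brier_score(p, o) / bs_ref

# example with hypothetical forecast probabilities and observed events
p = [0.9, 0.1, 0.7, 0.2, 0.0]
o = [1,   0,   1,   0,   1  ]
print(brier_score(p, o), brier_skill_score(p, o))
```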

  15. Main results
  • The COSMO-LEPS system has run on a daily basis since November 2002 (6 “failures” in almost 5 years of activity) and has become a “member-state time-critical application” at ECMWF (ECMWF operators are involved in the suite monitoring).
  • COSMO-LEPS products are used in EC projects (e.g. PREVIEW), field campaigns (e.g. COPS, MAP D-PHASE) and met-ops rooms across the COSMO community.
  • Time series of scores cannot easily disentangle improvements related to COSMO-LEPS itself from those due to better boundaries from the ECMWF EPS. Nevertheless, positive trends can be identified:
  • increase in ROC area scores and reduction in outlier percentages;
  • positive impact of increasing the population from 5 to 10 members (June 2004);
  • although some deficiencies in the skill of the system were identified after the system upgrades of February 2006 (from 10 to 16 members; from 32 to 40 model levels + EPS upgrade!), scores are encouraging throughout 2007.
  • 2 more features: a marked semi-diurnal cycle in COSMO-LEPS scores (better skill for “night-time” forecasts); better scores over the Alpine area than over the full domain (to be confirmed).

  16. Improving COSMO-LEPS forecasts of extreme events with reforecasts. F. Fundel, A. Walser, M. Liniger, C. Appenzeller. → Poster

  17. Why can reforecasts help to improve meteorological warnings? [Plot comparing model and observed values around 25 Jun ± 14 d]

  18. Spatial variation of model bias: difference of the CDFs of observations and COSMO-LEPS 24 h total precipitation, 10/2003-12/2006. The model is too wet, worst in southern Switzerland.

  19. COSMO-LEPS model climatology setup
  • Reforecasts over a period of 30 years (1971-2000)
  • Deterministic run of COSMO-LEPS (1 member), convection scheme = Tiedtke
  • ERA-40 reanalysis as initial/boundary conditions
  • 42 h lead time, 12:00 initial time
  • Calculated on hpce at ECMWF
  • Archived in MARS at ECMWF (surface: 30 parameters; 4 pressure levels: 8 parameters; 3 h step)
  • Post-processing at CSCS

  20. Calibrating an EPS. [Schematic: an ensemble forecast (x) positioned within the model climate distribution]

  21. New index: Probability of Return Period exceedance (PRP)
  • Dependent on the climatology used to calculate return levels/periods
  • Here, a monthly subset of the climatology is used (e.g. only data from the Septembers of 1971-2000)
  • PRP1 = event that happens once per September
  • PRP100 = event that happens in one out of 100 Septembers
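
  One plausible empirical construction of the index, sketched in Python: the return level for period T is the climatological quantile with exceedance probability 1/T, computed from the monthly subset, and PRP is the fraction of ensemble members above that level. return_level and prp are hypothetical helpers, and the gamma-distributed numbers are synthetic.

```python
import numpy as np

def return_level(monthly_clim, period_years):
    """Empirical return level from a monthly climatology subset:
    monthly_clim holds one value per year (e.g. September maxima,
    1971-2000); the level for return period T is the empirical quantile
    with exceedance probability 1/T."""
    return np.quantile(monthly_clim, 1.0 - 1.0 / period_years)

def prp(ensemble, level):
    """Probability of Return Period exceedance: the fraction of ensemble
    members forecasting a value above the return level."""
    return float(np.mean(np.asarray(ensemble) > level))

# hypothetical numbers: 30-year climatology, 16-member forecast
clim = np.random.gamma(2.0, 10.0, size=30)
ens = np.random.gamma(2.0, 12.0, size=16)
print(prp(ens, return_level(clim, 6)))   # a COSMO-PRP6-style probability
```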

  22. Probability of Return Period exceedance. [Maps: COSMO-PRP1/2 (twice per September), COSMO-PRP1 (once per September), COSMO-PRP2 (once in 2 Septembers), COSMO-PRP6 (once in 6 Septembers)]

  23. PRP-based warngrams. [Thresholds: twice per September (15.8 mm/24h); once per September (21 mm/24h); once in 3 Septembers (26.3 mm/24h); once in 6 Septembers (34.8 mm/24h)]

  24. PRP with Extreme Value Analysis. The underlying distribution function of extreme values y = x - u above a threshold u is the Generalized Pareto Distribution (GPD), a special case of the GEV:

  H(y) = 1 - (1 + ξ·y/σ)^(-1/ξ)

  σ = scale; ξ = shape. (C. Frei, Introduction to EVA)
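
  A sketch of a threshold-excess fit using scipy.stats.genpareto, assuming the return level for period T years is the level exceeded on average once per T years given the observed exceedance rate of the threshold u. The threshold choice and the synthetic data are for illustration only.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_return_level(data, u, events_per_year, period):
    """Fit a GPD to the excesses y = x - u above threshold u and return
    the level exceeded on average once per `period` years.
    events_per_year: mean number of threshold exceedances per year."""
    excess = data[data > u] - u
    shape, _, scale = genpareto.fit(excess, floc=0.0)  # xi (shape), sigma (scale)
    # conditional exceedance probability giving one event per `period` years
    p = 1.0 / (period * events_per_year)
    return u + genpareto.ppf(1.0 - p, shape, loc=0.0, scale=scale)

# hypothetical: 30 years of daily precipitation, threshold at the 95th percentile
x = np.random.gamma(0.8, 8.0, size=30 * 365)
u = np.quantile(x, 0.95)
rate = (x > u).sum() / 30.0                       # exceedances per year
print(gpd_return_level(x, u, rate, period=12))    # a PRP12-style return level
```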

  25. PRP with Extreme Value Analysis. [Maps: COSMO-PRP12 (GPD), COSMO-PRP60 (GPD)]

  26. Priority project «Verification of very high resolution models». Slides from: Felix Ament (→ Poster), Ulrich Damrath, Carlo Cacciamani, Pirmin Kaufmann (→ Poster)

  27. Motivation for new scores: which rain forecast would you rather use? [Panels: observed 24 h rain around Sydney, 21 Mar 2004; mesoscale model (5 km), RMS = 13.0; global model (100 km), RMS = 4.6]

  28. Fine-scale verification: fuzzy methods. “… do not evaluate a point-by-point match!” General recipe: (1) choose a threshold to define event and non-event; (2) define the scales of interest; (3) evaluate box statistics of forecast and observation at these scales. The score then depends on spatial scale and intensity.
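
  A minimal sketch of this recipe: upscale both fields by box-averaging, apply the intensity threshold, and compute a scale- and intensity-dependent statistic. The agreement fraction used here is a simple stand-in, not one of the toolbox scores discussed below; all names and numbers are illustrative.

```python
import numpy as np

def upscale(field, scale):
    """Box-average a 2-D field over non-overlapping scale x scale boxes
    (the field is trimmed to a multiple of the scale)."""
    ny, nx = (s - s % scale for s in field.shape)
    f = field[:ny, :nx]
    return f.reshape(ny // scale, scale, nx // scale, scale).mean(axis=(1, 3))

def box_agreement(fc, ob, threshold, scale):
    """Fraction of boxes where forecast and observation agree on
    event / non-event after upscaling: the result depends on both the
    spatial scale and the intensity threshold."""
    f_ev = upscale(fc, scale) >= threshold
    o_ev = upscale(ob, scale) >= threshold
    return float(np.mean(f_ev == o_ev))

# hypothetical rain fields on a 7 km grid, 1 mm threshold, 5-point boxes
fc, ob = np.random.gamma(0.5, 2.0, (2, 140, 140))
print(box_agreement(fc, ob, threshold=1.0, scale=5))
```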

  29. A Fuzzy Verification Toolbox. Ebert, E.E., 2007: Fuzzy verification of high resolution gridded forecasts: a review and proposed framework. Meteorol. Appl., submitted. Toolbox available at http://www.bom.gov.au/bmrc/wefor/staff/eee/fuzzy_verification.zip

  30. A fuzzy verification testbed. Virtual truth (radar data, model data, synthetic field) → perturbation generator → realizations of virtual erroneous model forecasts → fuzzy verification toolbox → analyzer → realizations of verification results. Assessment of sensitivity (mean) [and reliability (STD)]. Two ingredients: • reference fields: hourly radar-derived rain fields, August 2005 flood event, 19 time stamps (Frei et al., 2005); • perturbations: → next slide.

  31. Perturbations

  32. Perfect forecast: all scores should equal 1! But, in fact, 5 out of 12 do not!

  33. Expected response to perturbations. [Matrix of scores: spatial scale from coarse to fine vs. intensity from low to high.] Sensitivity: expected (= 0.0); not expected (= 1.0). Summary in terms of contrast: Contrast := mean(cells where sensitivity is expected) - mean(cells where it is not).

  34. Summary. [Scatter of contrast vs. STD; “real good” region marked; leaking scores stand out.]
  • Leaking scores show an overall poor performance.
  • “Intensity scale” and “Practically Perfect Hindcast” perform well in general, but …
  • Many scores have problems detecting large-scale noise (LS_NOISE); “Upscaling” and “50% coverage” are beneficial in this respect.

  35. August 2005 flood event. Precipitation sum 18.8.-23.8.2005 (hourly radar data calibrated using rain gauges; Frei et al., 2005). [Panels with area-mean values: 106.2 mm, 73.1 mm, 62.8 mm, 43.2 mm]

  36. Fuzzy verification of the August 2005 flood, based on 3-hourly accumulations during the flood period (18.8.-23.8.2005). [Score matrices for COSMO-2 and COSMO-7: scale (in 7 km grid points) vs. intensity threshold (mm/3h); colour scale from good to bad]

  37. Fuzzy verification of the August 2005 flood: difference of fuzzy scores. [Matrix: scale (in 7 km grid points) vs. intensity threshold (mm/3h); colours indicate COSMO-2 better / neutral / COSMO-7 better]

  38. D-PHASE, August 2007: intensity scale score (preliminary), 3 h accumulation. [Panels: COSMO-2, COSMO-7, COSMO-DE, COSMO-EU]

  39. “Fuzzy”-type verification for 12 h forecasts (vv = 06 to vv = 18) starting at 00 UTC, August 2007 (fraction skill score).
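
  The fraction skill score named here can be sketched compactly with its standard definition: compare event fractions in neighbourhoods of increasing size; FSS = 1 for a perfect match and 0 for no skill. The slide's scores come from the toolbox; the fields and parameters below are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fc, ob, threshold, scale):
    """Fraction skill score: compare forecast and observed event fractions
    in sliding neighbourhoods of `scale` x `scale` grid points.
    FSS = 1 - MSE(fractions) / worst-case MSE."""
    f_frac = uniform_filter((fc >= threshold).astype(float), size=scale)
    o_frac = uniform_filter((ob >= threshold).astype(float), size=scale)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)  # no-overlap reference
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# hypothetical rain fields, 1 mm/3h threshold, 15-point neighbourhood
fc = np.random.gamma(0.5, 2.0, (140, 140))
ob = np.random.gamma(0.5, 2.0, (140, 140))
print(fss(fc, ob, threshold=1.0, scale=15))
```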

  40. First simple approach: averaging QPF in boxes of different size (what is the best size?) or over alert warning areas (Emilia-Romagna).

  41. Sensitivity to box size and precipitation threshold. The positive impact of a larger box is more visible at higher precipitation thresholds.

  42. Sensitivity to box size and precipitation threshold. Best result: box = 0.5 deg? (7 × 7 grid points …)

  43. Sensitivity to box size and precipitation threshold. Best result: box = 0.5 deg? (7 × 7 grid points …)

  44. Some preliminary conclusions
  • QPF spatial averaging over boxes or alert areas produces a more usable QPF field for applications; space-time localisation errors are minimised.
  • Boxes or alert areas with a size of 5-6 times the grid resolution give the best results.
  • The positive impact of a larger box is more visible at higher precipitation thresholds.
  • The gain of high-resolution LAMs with respect to GCMs is greater for high thresholds and for precipitation maxima.
  • Results improve with longer time averaging (problems with a 6-hour accumulation period, much better with a 24-hour accumulation period!).

  45. 1999-10-25 (Case L): temporal radius. [Panels: obs; rt = 1; rt = 3; rt = 6]

  46. 1999-10-25 (Case L): spatial radius. [Panels: obs; rxy = 5; rxy = 10; rxy = 15]

  47. Cross-verification of the Italian COSMO model implementations

  48. Comparison between COSMO-ME and COSMO-IT (with upscaling)

  49. Verification of very high resolution models (precipitation)
  • “Optimal” scale: 0.5°: 50 km; 5 × grid (7 km): 35 km; 30 × 2.2 km: 70 km
  • Some signals that 2 km models are better than 7 km models
  • I would like to generate smoothed products
  • Material starts to be collected: MAP D-PHASE, 2 km models
  • Work has to continue
  • Exchange of experience with other consortia

  50. Verification of COSMO-LEPS and coupling with a hydrologic model. André Walser (1) and Simon Jaun (2). (1) MeteoSwiss; (2) Institute for Atmospheric and Climate Science, ETH
