SRNWP – Revised Verification Proposal


Presentation Transcript


  1. SRNWP – Revised Verification Proposal Clive Wilson, presented by Terry Davies at the SRNWP Meeting, October 8-11, 2007

  2. Original Draft Proposal (not submitted to EUMETNET) • 2 aims • Development of a common verification package • Realization of an operational model intercomparison • Further aims • Provide new methods (fuzzy verification, etc.) • Allow non-GTS observation data • Radar composites (esp. OPERA) • The Responsible Member would: • Write & maintain the package code • Compute the intercomparison scores, run the website & archive • Find an NMS (or ECMWF) to host a non-GTS data hub • Motivate NMSs to contribute non-GTS data to the hub

  3. Not submitted because: • Not fully agreed • Too ambitious: underestimated the time/effort needed to develop the new code & package • No one indicated a wish to be Responsible Member • Major centres and consortia already had most of the proposed functionality in their own packages • EUMETNET reluctant to agree the proposed cost (1 FTE scientist + travel + 25% programme manager)

  4. New (draft) proposal – staged aims • Initiate a “realistic” intercomparison based on: • Exchange of forecasts from the main models at 3-4 centres (format: GRIB initially, interoperability to be defined; see the regridding sketch after this slide) • Met Office NAE - 12 km • Hirlam reference - 15 km • Aladin France - 10 km • COSMO-EU - 7 km • Use existing packages • Accept different station selections and QC (difficult to mandate/change at operational centres) • Verify common scores for the same parameters over common areas • Compare, contrast & pool results to reach a “consensus” • An extension of the existing precipitation verification done by the Met Office
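As a concrete illustration of the stage-1 exchange, the sketch below reads two exchanged GRIB forecasts with Python's xarray/cfgrib and interpolates them onto a common verification grid, so the same scores can be computed over a common area. This is a minimal sketch, not part of the proposal: the file names, the variable key ("tp"), and the dimension names are assumptions that depend on how the exchanged files are actually encoded.

```python
# Hypothetical sketch: put two exchanged GRIB forecasts on a common
# lat-lon grid so common scores can be computed over a common area.
# Assumes regular lat-lon grids and the cfgrib engine for xarray.
import numpy as np
import xarray as xr

# File names and the variable key ("tp", total precipitation) are assumed.
nae = xr.open_dataset("metoffice_nae.grib", engine="cfgrib")
aladin = xr.open_dataset("aladin_france.grib", engine="cfgrib")

# A common verification area (here an illustrative European box at 0.25 deg).
common_lat = np.arange(40.0, 60.0, 0.25)
common_lon = np.arange(-10.0, 20.0, 0.25)

# Bilinear interpolation onto the common grid; the two models are then
# directly comparable point by point over the same domain.
nae_common = nae["tp"].interp(latitude=common_lat, longitude=common_lon)
aladin_common = aladin["tp"].interp(latitude=common_lat, longitude=common_lon)
```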

  5. Current Intercomparison of precipitation forecasts

  6. External Met Office website – European precipitation comparison • http://www.metoffice.gov.uk/research/nwp/numerical/precipitation/emip.html • Password protected • Models: • Hirlam reference • COSMO-EU (DWD Lokal-Modell) • Aladin (MeteoFrance) • UM – North Atlantic European (NAE) • COSMO-7 (MeteoSwiss) • Verified against the British Isles Nimrod radar composite

  7. External Met Office website – European precipitation comparison • 24 h accumulation thresholds: 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 32.0 and 48.0 mm • Scores (formulas sketched below): • frequency bias • Equitable Threat Score (ETS) • log-odds ratio • Extreme Dependency Score (EDS) • 00 UTC run of each model, since the beginning of January 2004 • Means, time series & ASCII contingency tables
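All four listed scores are functions of the 2x2 contingency table (hits a, false alarms b, misses c, correct negatives d) obtained by thresholding the forecast and the radar accumulation at each threshold above. A minimal sketch using the standard formulas; the array names and the example threshold are illustrative, and degenerate cells (e.g. b = 0 or c = 0) would need guarding in practice.

```python
import numpy as np

def contingency(fcst, obs, threshold):
    """2x2 contingency table for one accumulation threshold."""
    f = fcst >= threshold
    o = obs >= threshold
    a = np.sum(f & o)     # hits
    b = np.sum(f & ~o)    # false alarms
    c = np.sum(~f & o)    # misses
    d = np.sum(~f & ~o)   # correct negatives
    return a, b, c, d

def scores(a, b, c, d):
    n = a + b + c + d
    bias = (a + b) / (a + c)                    # frequency bias
    a_rand = (a + b) * (a + c) / n              # hits expected by chance
    ets = (a - a_rand) / (a + b + c - a_rand)   # Equitable Threat Score
    lor = np.log((a * d) / (b * c))             # log-odds ratio
    # Extreme Dependency Score (Stephenson et al. 2008)
    eds = 2.0 * np.log((a + c) / n) / np.log(a / n) - 1.0
    return bias, ets, lor, eds

# Illustrative use with one of the thresholds above (4.0 mm / 24 h):
# a, b, c, d = contingency(fcst_24h, radar_24h, 4.0)
# print(scores(a, b, c, d))
```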

  8. Some example results

  9. Latest results

  10. New (draft) proposal – later staged aims • Add more models/configurations • Add higher-resolution forecasts to the intercomparison • Methods/code for high-resolution forecasts (an illustrative neighbourhood-score sketch follows) • Collaborate on the investigation of new methods • Intercomparison studies for a set of forecasts from a single model (cf. the NCAR project with WRF) • Provide code for new methods • Enable access to radar composites (OPERA) • Non-GTS data & hub • Requires much greater financial and staff resources
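The slides do not prescribe which high-resolution methods the later stages would deliver. One widely used "fuzzy" neighbourhood method is the Fractions Skill Score (FSS) of Roberts & Lean (2008), sketched below purely as an illustration of that kind of code; the function and argument names are hypothetical.

```python
# Illustrative sketch of a neighbourhood ("fuzzy") score: the Fractions
# Skill Score compares event fractions in square neighbourhoods rather
# than demanding grid-point matches, which suits high-resolution models.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, window):
    """Fractions Skill Score over `window` x `window` neighbourhoods."""
    # Fraction of points exceeding the threshold in each neighbourhood.
    pf = uniform_filter((fcst >= threshold).astype(float), size=window)
    po = uniform_filter((obs >= threshold).astype(float), size=window)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # worst-case reference
    return 1.0 - mse / mse_ref
```

FSS ranges from 0 (no skill) to 1 (perfect); increasing the window size shows at what spatial scale a high-resolution forecast becomes skilful.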

  11. Why the new draft may be acceptable • Practical & pragmatic • Stage 1 does not involve large cost or require extensive code changes • Stage 1 addresses the primary concern (of the EUMETNET directors) for meaningful verification of operational models • Stage 2 allows more centres to judge their models against others over common domains • Stage 3 addresses the new challenge of high resolution – still an active research topic • Stages 4 & 5 will be necessary to evaluate and assess future operational high-resolution models

  12. Questions
