
Agenda



Presentation Transcript


  1. Agenda

  2. SG4 – Presentations on the Questionnaire • Short presentations (max 10 min) by: • Ana Miranda (PT) • David Carruthers (UK) • Hans Backström (SE) • Helge Olesen (DK) • Marcus Hirtl (AT) • Mihaela Mircea, Guido Pirovano (IT)

  3. Background

  4. The benchmarking procedure [workflow diagram linking JRC, USER, Data Extraction Facility, Model results, DELTA BENCHMARKING service, Model performance and Evaluation reports]

  5. Since the Oslo meeting (Sep. 2010) • Document “DELTA concepts” sent to SG4 participants (March 2011) • Distribution of the DELTA tool & utilities (about 20 users) • SG4 web page created (http://aqm.jrc.it/DELTA) • Use of DELTA on different datasets (POMI, Madrid, London) • User feedback questionnaire

  6. The DELTA tool • Intended for rapid diagnostics by single users (at home) • Focus mostly on surface measurement-model pairs → “independence” of scale • Focus on AQD-related pollutants over a yearly period (but AQ-related input data also checked) • Exploration and benchmarking modes

  7. Agenda

  8. Outline • Content of the performance report • Links between Target and more “traditional” indicators (an analysis based on 3 datasets) • A “first-guess” for criteria/goals • Comparison with RDE • Observation uncertainty • Proposed update to the report template

  9. Content of the performance report

  10. Content of the performance report (1) Constraints • Should include a set of statistical indicators and diagrams complete enough to capture the main aspects of model performance but limited enough to fit in a one-page summary • Keep a similar template for all pollutants and spatial scales (but with differences in terms of criteria/goals) • Restricted to AQD needs. Currently proposed for O3 8h daily max, NO2 hourly and PM10 daily • Developed (at least at first) for assessment purposes • Should include performance criteria and goals

  11. Content of the performance report (2) [Target diagram annotations: MEF < 0, R = 0.7, RMSE/σo, OU] Criteria: acceptable performance for a given type of application (e.g. PM: MFE = 75%, MFB = ±60%). Goal: best performance a model should aim to reach given its current capabilities (e.g. PM: MFE = 50%, MFB = ±30%)
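The MFB and MFE quoted above are standard fractional statistics. A minimal sketch (not part of DELTA; the function name and sample values are invented) of how they could be computed and compared against the PM criterion and goal:

```python
import numpy as np

def mfb_mfe(obs, mod):
    """Mean fractional bias and mean fractional error (as fractions, not %)."""
    obs = np.asarray(obs, dtype=float)
    mod = np.asarray(mod, dtype=float)
    frac = 2.0 * (mod - obs) / (mod + obs)   # fractional difference per obs/model pair
    return frac.mean(), np.abs(frac).mean()

# Hypothetical daily PM10 values (ug/m3), checked against the criterion
# (MFE <= 75%, |MFB| <= 60%) and the goal (MFE <= 50%, |MFB| <= 30%) quoted above.
obs = np.array([22.0, 35.0, 48.0, 61.0])
mod = np.array([18.0, 40.0, 44.0, 55.0])
mfb, mfe = mfb_mfe(obs, mod)
print(f"MFB = {mfb:+.0%}, MFE = {mfe:.0%}")
print("criterion met:", abs(mfb) <= 0.60 and mfe <= 0.75)
print("goal met:", abs(mfb) <= 0.30 and mfe <= 0.50)
```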

  12. Content of the performance report (3) Checks on data availability for each station • 75% for time averaging (e.g. at least 18 h per day) • 90% availability in total (e.g. > 328 days/year) • 90% concept for indicators
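A sketch of how the two availability checks listed here could be applied to an hourly series; the thresholds come from the slide, everything else (including the pandas-based helper) is illustrative and not DELTA code:

```python
import numpy as np
import pandas as pd

def daily_means_with_availability(hourly, min_hours=18, min_days=328):
    """Daily means kept only where >= 75% of hours (18 h) are present;
    the station passes only if >= 90% of the days (>= 328/365) remain."""
    counts = hourly.resample("D").count()      # valid hours per day
    means = hourly.resample("D").mean()        # daily means (NaNs skipped)
    valid = means[counts >= min_hours]
    return valid, len(valid) >= min_days

# Hypothetical hourly series for one station and one year, with ~5% gaps
idx = pd.date_range("2005-01-01", "2005-12-31 23:00", freq="h")
hourly = pd.Series(np.random.gamma(2.0, 15.0, len(idx)), index=idx)
hourly[hourly.sample(frac=0.05, random_state=0).index] = np.nan
daily, station_ok = daily_means_with_availability(hourly)
print(f"valid days: {len(daily)}, passes 90% check: {station_ok}")
```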

  13. Links between Target and more “traditional” indicators (an analysis based on 3 datasets)

  14. Links between Target and more “traditional” indicators (1) Target indicator = RMSE / SigO [diagram also showing R, Bias, SigM/SigO, CRMSE and FAC2]
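A minimal sketch of the statistics grouped on this slide, with Target computed as RMSE/SigO as written above. The helper name and the FAC2 definition used here (fraction of pairs with 0.5 ≤ M/O ≤ 2) follow common usage and are assumptions, not DELTA internals:

```python
import numpy as np

def core_stats(obs, mod):
    """Bias, RMSE, CRMSE, R, sigma ratio, FAC2 and Target = RMSE / sigma_O."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias = (mod - obs).mean()
    rmse = np.sqrt(((mod - obs) ** 2).mean())
    crmse = np.sqrt((((mod - mod.mean()) - (obs - obs.mean())) ** 2).mean())
    r = np.corrcoef(obs, mod)[0, 1]
    ratio = mod.std() / obs.std()                      # SigM / SigO
    fac2 = np.mean((mod / obs >= 0.5) & (mod / obs <= 2.0))
    target = rmse / obs.std()                          # Target indicator
    return dict(bias=bias, rmse=rmse, crmse=crmse, r=r,
                sigM_over_sigO=ratio, fac2=fac2, target=target)

# Hypothetical values; note the decomposition RMSE^2 = Bias^2 + CRMSE^2
print(core_stats(obs=[30, 45, 60, 52, 38], mod=[25, 50, 55, 60, 35]))
```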

  15. Examples on 3 datasets • Po Valley: 61 monitoring sites (suburban, urban and rural background); 5 models: CHIMERE, TCAM, CAMX, RCG, MINNI; year: 2005; domain resolution: 6x6 km2; O3, PM10 • Madrid: 10 monitoring sites (urban background); 1 model: WRF-CMAQ; year: 2007; domain resolution: 1x1 km2; O3, NO2 • London: 107 monitoring sites (suburban/urban background, kerbside and roadside); 1 model: ADMS; year: 2008; NO2, O3, PM10

  16. Links between Target and more “traditional” indicators (3) Methodology to fix “first-guess” criteria • Based on real datasets, start by analysing how the bias criterion (MFB) proposed by Boylan and Russell (2005) compares to Target • Fix a criterion for the Target indicator that is consistent with the MFB criterion • Fix values for the other statistical indicators (R, StdDev ratio, FAC2) that are consistent with the assigned criterion on the Target value

  17. How to connect Target to more accessible indicators? (3) EXAMPLE 1: Po Valley (PM10) Crit Target=1 T: 58% RDE: 83%

  18. How to connect Target to more accessible indicators? (3) EXAMPLE 2: Po Valley (PM10) Crit Target=1 T: 32% RDE: 95%

  19. How to connect Target to more accessible indicators? (3) EXAMPLE 3: London (PM10) Crit Target=1 T: 96% RDE: 100%

  20. How to connect Target to more accessible indicators? (3) EXAMPLE 4: Po Valley (O3) Crit Target=0.8 T: 70% RDE: 96%

  21. How to connect Target to more accessible indicators? (3) EXAMPLE 5: Madrid (O3) Crit Target=0.8 T: 66% RDE: 100%

  22. How to connect Target to more accessible indicators? (3) EXAMPLE 6: London (NO2) Crit Target=1 T: 77% RDE: 94%

  23. How to connect Target to more accessible indicators? (3) EXAMPLE 7: Madrid (NO2) Crit Target=1 T: 60% RDE: 100%

  24. A “first-guess” for criteria/goals

  25. A “first-guess” for criteria/goals (1) • NOTE: the Boylan and Russell MFB criteria • are based on urban-to-regional-scale modelling (4 to 36 km spatial resolution) • address only O3 and PM10

  26. A “first-guess” for criteria/goals (2) • Different criteria are currently proposed for O3-8h, PM10-daily and NO2-hourly. Spatial-scale and time-average dependencies are possible but have not been considered so far (point of discussion) • Scale is intended in terms of spatial resolution, linked to monitoring station type: • Regional → rural background • Urban → urban & suburban background • Local → all urban stations (incl. roadside & kerbside) • Criteria probably need to be developed for yearly averaged values • Performance goals have arbitrarily been fixed at a 20% more stringent value • 3 datasets are not ENOUGH!
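A small sketch of the station-type grouping and the “20% more stringent” rule from this slide, read here as multiplying a criterion by 0.8; the dictionary, function and names are illustrative assumptions, not part of DELTA:

```python
# Station types contributing to each spatial scale, as listed on the slide
STATIONS_FOR_SCALE = {
    "regional": {"rural background"},
    "urban": {"urban background", "suburban background"},
    "local": {"urban background", "suburban background", "roadside", "kerbside"},
}

def performance_goal(criterion, stringency=0.20):
    """Goal fixed (arbitrarily, per the slide) 20% more stringent than the criterion."""
    return criterion * (1.0 - stringency)

print(sorted(STATIONS_FOR_SCALE["local"]))   # station types used at the local scale
print(performance_goal(1.0))                 # e.g. criterion 1.0 -> goal 0.8
```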

  27. How do these criteria compare to RDE?

  28. PM10 [figure annotations: Target, R, MFB, FAC2, RDE, SigM/SigO]

  29. O3 [figure annotations: Target, R, MFB, FAC2, RDE, SigM/SigO]

  30. NO2 [figure annotations: Target, R, MFB, FAC2, RDE, SigM/SigO]

  31. How do these criteria compare to RDE? (2) Station Osio Sotto, POMI (NO2 - RCG) Target: 2.19 MFB: 73% FAC2: 41% R: 0.39 SigM/SigO: 1.49 RDE = 11%

  32. How do these criteria compare to RDE? (3) Station EA1, London (NO2 - ADMS) Target: 0.82 MFB: 8% FAC2: 89% R: 0.73 SigM/SigO: 1.14 RDE = 56%
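For readers comparing these numbers, a sketch of the relative directive error (RDE) as it is commonly defined in the AQD context: the observed concentration closest to the limit value is compared with the modelled concentration of the same rank. The function, the sample data and the limit value are illustrative; DELTA’s exact implementation may differ:

```python
import numpy as np

def relative_directive_error(obs, mod, limit_value):
    """RDE = |O_LV - M_LV| / LV, with O_LV the observed concentration closest
    to the limit value and M_LV the modelled concentration of the same rank
    (common AQD-style definition; assumed here, not taken from DELTA)."""
    obs = np.sort(np.asarray(obs, float))[::-1]    # rank both series high-to-low
    mod = np.sort(np.asarray(mod, float))[::-1]
    rank = np.argmin(np.abs(obs - limit_value))    # rank of the obs nearest the LV
    return abs(obs[rank] - mod[rank]) / limit_value

# Hypothetical daily PM10 values against the 50 ug/m3 daily limit value
obs = [62, 55, 49, 44, 38, 30, 25]
mod = [58, 51, 47, 45, 35, 33, 22]
print(f"RDE = {relative_directive_error(obs, mod, 50.0):.0%}")   # -> RDE = 4%
```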

  33. About observation uncertainty

  34. About observation uncertainty (1) [figure annotations: SigO, RMSE/SigO, AQD]

  35. About observation uncertainty (2) [panels for O3, NO2 and PM10; ADMS, London, 2008]

  36. Proposed update to the report template

  37. Proposed update to the report template [annotated template: elements marked ✘ or ✓; SigO/SigM, with the cases SigO > SigM and SigO < SigM distinguished]

  38. Agenda

  39. DELTA: Exploration mode (1) • Exploration: • Time selection (period, averaging time, season, day/night-time, max/min/mean) • Information overlay (models, scenarios, variables, stations) • Spatial analysis (color codes vs. 2D maps)
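As an illustration of the kind of time selection listed above (e.g. the O3 daily maximum of the 8-hour running mean used elsewhere in this report), a pandas sketch; the code and names are illustrative and not part of DELTA:

```python
import numpy as np
import pandas as pd

def o3_daily_max_8h(hourly_o3):
    """Daily maximum of the 8-hour running mean, requiring >= 6 of 8 hours (75%)."""
    run8 = hourly_o3.rolling(window=8, min_periods=6).mean()
    return run8.resample("D").max()

# Hypothetical hourly O3 series with a simple diurnal cycle (ug/m3)
idx = pd.date_range("2005-07-01", periods=10 * 24, freq="h")
hourly = pd.Series(60 + 40 * np.sin(2 * np.pi * (idx.hour - 6) / 24), index=idx)
print(o3_daily_max_8h(hourly).head())
```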

  40. DELTA: Exploration mode (2)

  41. DELTA: Exploration mode (3) Model V1 vs. Model V2 Upgrade

  42. DELTA: Exploration mode (4) Upgrade

  43. Agenda

  44. Agenda

  45. Agenda

  46. DELTA developments • Short term (Autumn 2011) • Flexible use of the benchmarking mode and production of PDF or PostScript reports • On-click mouse information • Windows/Linux portability • Station grouping mode • Longer term (2011-2012) • Inclusion of planning applications • Extension of benchmarking to annual averages (?) • Inclusion of PM2.5

  47. DELTA developments Model responses to emission reductions depend on the geographical location, the model scale, the meteorological year… • Requires a series of simulations with fixed emission reductions for the main precursors (NOx, VOC, NH3, SO2, PPM) and an analysis of the differences in behavior. Problem: • No observations available • Reference model? • Joint exercises • Analysis of spatio-temporal emission patterns in the provided data (e.g. weekday vs. weekend day, DEFRA 2011) • → DELTA exploration mode (links with SG3)

  48. Agenda

  49. DELTA Benchmark [workflow diagram linking JRC, USER, Data Extraction Facility, Model info, Model results, DELTA BENCHMARKING service, Model performance, Evaluation reports and the Case & Report DB] Deadline: end 2012
