
STAR EU research project: contributions to intercalibration



  1. ECOSTAT WG 2A, JRC - Ispra (I), 7-8 July 2004. STAR EU research project: contributions to intercalibration. Testing Option 2 of the Intercalibration Guidance at example stream types across Europe. Andrea Buffagni, Stefania Erba, CNR - IRSA, Water Research Institute, Italy; Sebastian Birk, University of Essen, Germany. STAR Project

  2. Contributors to the work “behind” this presentation: Andrea Buffagni, Stefania Erba, Marcello Cazzola, CNR-IRSA (Italy); John Murray-Bligh, EA (UK); Mike Furse, CEH (UK); Sebastian Birk, Daniel Hering, Uni Essen (Germany); Hania Soszka, Agnieszka Kolada, EPA (Poland); Paulo Pinto, Uni Evora (Portugal); Helena Alves, INAG (Portugal). Non-STAR contributors: Jean-Gabriel Wasson, CEMAGREF (France); Joao-Ana Bernardo, Uni Evora (Portugal); Manuel Toro Velasco, CEDEX (Spain)

  3. Main options for the IC process (see the Intercalibration Guidance). Option 1: Member States in a GIG area use the same WFD assessment method; Consistency and Comparability equally guaranteed. Option 2: Use of an Intercalibration Common Metrics (ICMs) method identified specifically for the purposes of the intercalibration exercise; starting point: Comparability, with use of external datasets. Option 3: Direct comparison of national methods at intercalibration sites; starting point: Consistency, with no external data used

  4. For this pilot exercise to check Option 2 applicability, we are referring to: rivers; macroinvertebrates; organic/nutrient pollution/general degradation. We analysed data from 7 different MSs, for 2 stream types: Mediterranean GIG, R-M1 (France, Italy, Portugal, Spain) and Central GIG, R-C1 (Italy, Poland, UK), plus Central GIG, R-C4 (AQEM/STAR data)

  5. Outline of Option 2: Use of a common metric(s) method identified specifically for the purposes of the intercalibration exercise

  6. In general terms, we have been testing (pilot): the suitability of common metrics (ICMs and ICM Index) to describe the degradation gradient in different MSs and stream types (within and among GIGs) → their fit with existing MS methods and classifications; whether any differences exist among MSs’ quality classifications in terms of ICM Index values; whether any differences exist between a MS quality classification and a “benchmarking” international classification in terms of ICM Index values; and, if differences do exist, how big the effort to reduce them might be for MSs → is it possible and acceptable to apply Option 2?

  7. Best Available Classification (BAC) to ensure Consistency and Comparability. Outline of Option 2: Use of a common metric(s) method identified specifically for the purposes of the intercalibration exercise. Step 1a: to select ICMs. Step 1b: to check suitability of ICMs for MSs’ ecological gradients (BAC)

  8. Approach A: Harmonisation of national classification schemes using common metrics and a benchmark dataset.
  Common metrics: metrics (ICMs) that indicate man-made stress in different habitats (= types), provide comparable results across Europe / within GIGs and provide information consistent with WFD definitions → combined into a Common Multimetric Index.
  Benchmark dataset: can be used as an external system for trans-national analysis and comparison, i.e. for checking and selecting ICMs and a suitable ICM Index, and for comparing the national Test datasets to a common (external) dataset → quality classification (1 to 5) of each sample based on a Best Available Classification (BAC). Especially useful for Option 2; potentially useful to support Option 3.
  Test datasets: contain the data to be tested/harmonised, i.e. samples of the national monitoring programmes → quality classification (1 to 5) of each sample based on the National Assessment and Classification Method. Needed for Option 2; useful for Option 3 (extension/integration of the IC sites network).
  COMPARISON: range of results of the Common Multimetric Index per quality class
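
To make the “combined into a Common Multimetric Index” step concrete, here is a minimal sketch in Python. The component metrics (ASPT, EPT taxa, number of families), the anchor values and the equal-weight averaging are illustrative assumptions, not the actual STAR/ICM metric selection.

```python
# A minimal sketch, assuming equal-weight averaging of normalised metrics;
# the metrics and anchor values below are illustrative only.
import numpy as np

def normalise(values, ref_low, ref_high):
    """Rescale a metric to the 0-1 range against assumed anchor values."""
    scaled = (np.asarray(values, dtype=float) - ref_low) / (ref_high - ref_low)
    return np.clip(scaled, 0.0, 1.0)

def icm_index(metrics):
    """Combine the normalised component metrics into one index (mean)."""
    return np.mean(np.column_stack(metrics), axis=1)

# Four hypothetical samples along a degradation gradient:
aspt  = normalise([6.8, 5.9, 4.7, 3.5], ref_low=2.0, ref_high=7.5)
ept   = normalise([18, 12, 6, 2], ref_low=0, ref_high=22)
n_fam = normalise([28, 24, 15, 9], ref_low=4, ref_high=32)

print(icm_index([aspt, ept, n_fam]))  # higher = closer to reference state
```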

  9. The Benchmark dataset used: high number of sites/samples; strictly WFD compliant

  10. Key results on ICMs and ICM Index: for both GIGs/stream types, it was possible to select ICMs performing well across countries (and even across GIGs). The combination of such metrics into an ICM Index resulted in a well-performing tool to compare MSs’ assessment systems across Europe

  11. ICM index values at sites classified according to the Best Available Classification (strictly WFD compliant): an example from 4 Italian stream types (Med and Central GIGs together)

  12. Approach A – ICM Index suitability and comparison of MSs’ classifications: an example for R-M1 (data from Med GIG). R-M1 stream type: French (IBGN) and Italian (IBE) data. [Chart: ICM Index values with the High/Good and Good/Moderate National boundaries marked]

  13. Approach B – Direct comparison of MSs’ assessment systems and classification, WITHOUT using an ICM Index: an example for R-C4. > 80 samples; countries: Denmark, Germany, Sweden, United Kingdom. Assessment methods: Saprobic Index (DE), Danish Stream Fauna Index (DK, SE), Average Score Per Taxon (UK, SE). National classification systems and national reference values used to calculate EQRs; possible use of a benchmark dataset
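
As a hedged illustration of the EQR step mentioned on this slide, the sketch below divides an observed metric value by a type-specific reference value and maps the ratio onto the five WFD classes. The reference value and class boundaries are invented placeholders, not the real national figures.

```python
# Sketch of the EQR step: observed metric value divided by a type-specific
# reference value, then mapped onto the five WFD classes. The reference
# value and the class boundaries are invented placeholders.
def eqr(observed, reference):
    """Ecological Quality Ratio, capped at 1."""
    return min(observed / reference, 1.0)

def classify(eqr_value, boundaries=(0.8, 0.6, 0.4, 0.2)):
    """Map an EQR onto WFD classes using assumed boundary values."""
    for cls, b in zip(("High", "Good", "Moderate", "Poor"), boundaries):
        if eqr_value >= b:
            return cls
    return "Bad"

print(classify(eqr(observed=4.6, reference=6.3)))  # -> Good (EQR ~ 0.73)
```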

  14. Actual situation: an example for R-C4 (medium, lowland, mixed). [Chart: share of quality classes per method, n = 83; methods A, B and C as applied in countries 1-4]

  15. [Chart: sample-by-sample comparison of method A (country 1) vs method B (country 2), R-C4 medium, lowland, mixed; n = 83, rs = 0.80; classified equally: 48 samples; method A higher: 30 samples; method B higher: 5 samples]
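
The comparison on this slide (rank correlation plus counts of equally and differently classified samples) can be reproduced along these lines; the two class vectors below (1 = High … 5 = Bad) are invented stand-ins for the 83 real samples.

```python
# Sketch of the pairwise comparison: Spearman rank correlation between two
# national classifications plus a tally of agreement; data are invented.
import numpy as np
from scipy.stats import spearmanr

method_a = np.array([1, 2, 2, 3, 4, 2, 1, 3])
method_b = np.array([1, 2, 3, 3, 5, 2, 2, 3])

rs, p = spearmanr(method_a, method_b)
print(f"rs = {rs:.2f}, p = {p:.3f}")
print("equal:", int(np.sum(method_a == method_b)),
      "| A assigns better class:", int(np.sum(method_a < method_b)),
      "| B assigns better class:", int(np.sum(method_a > method_b)))
```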

  16. Outline of Option 2: Use of a common metric(s) method identified specifically for the purposes of the intercalibration exercise. Step 1a,b: to select and check ICMs → done (BAC). Step 2: to set agreed boundaries for the ICM Index → done (example); IC sites acceptance → preliminary. Step 3: to compare agreed ICM boundaries to National boundaries

  17. Comparison of test data to benchmark data. The ICM Index was calculated on the standard monitoring samples (Test dataset). The median values of the ICMs obtained in the Test and Benchmark datasets for the classes High and Good were compared. [Chart: Test vs Benchmark dataset distributions] High/Good boundary: p = 0.01545 *; Good/Moderate boundary: p = 0.01799 * (Mann-Whitney U test)
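
A minimal sketch of the significance test quoted above, assuming the comparison is run per quality class on the ICM Index values of the two datasets; the arrays are illustrative stand-ins for the real Test and Benchmark data.

```python
# Per-class Mann-Whitney U test between Test and Benchmark ICM Index
# values; the arrays below are illustrative, not STAR data.
from scipy.stats import mannwhitneyu

test_high      = [0.91, 0.84, 0.88, 0.79, 0.86]
benchmark_high = [0.95, 0.97, 0.90, 0.93, 0.92, 0.96]

u, p = mannwhitneyu(test_high, benchmark_high, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")  # p < 0.05 -> the High classes differ
```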

  18. To harmonize National boundaries to the benchmark dataset classes = to reduce the statistical difference between ICM Index values for the High and Good classes → the High/Good threshold of the National methods was shifted (step-by-step procedure) until no more significant differences were found. [Chart: High and Good status boundaries before and after harmonization]

  19. Comparison of test data to benchmark data. High status re-setting: sites with the lowest values (in the Test dataset) are moved to the Good status class until no more differences are observed between the two datasets for ICM Index values. Good status re-setting: same procedure, sites moved to Moderate status. [Chart: harmonized National method, Test vs Benchmark dataset] High/Good boundary: p = 0.1718 NS; Good/Moderate boundary: p = 0.9903 NS (Mann-Whitney U test)
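
Slides 18-19 describe an iterative re-setting procedure; one possible reading of it in code is sketched below. The function name, the stopping rule (two-sided Mann-Whitney at alpha = 0.05) and the data are my assumptions, not the exact STAR implementation.

```python
# Demote the lowest-index samples from a class until it no longer differs
# significantly from the benchmark; names, alpha and data are assumptions.
from scipy.stats import mannwhitneyu

def harmonise_class(test_values, benchmark_values, alpha=0.05):
    """Return (kept, demoted) after moving the lowest test values down
    one class until the Mann-Whitney difference becomes non-significant."""
    kept = sorted(test_values, reverse=True)
    demoted = []
    while len(kept) > 1:
        p = mannwhitneyu(kept, benchmark_values, alternative="two-sided").pvalue
        if p > alpha:               # no significant difference any more
            break
        demoted.append(kept.pop())  # demote the lowest-valued sample
    return kept, demoted

high_test  = [0.91, 0.88, 0.86, 0.84, 0.72, 0.69]
high_bench = [0.95, 0.97, 0.90, 0.93, 0.92, 0.96]
kept, demoted = harmonise_class(high_test, high_bench)
print("kept in High:", kept, "| moved to Good:", demoted)
```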

  20. Test dataset – site classification according to actual boundaries of the National method and to harmonized boundaries. [Chart: sample counts per classification outcome: 361 samples, 57 samples, 15 samples]

  21. Step 3: to compare agreed ICM boundaries to National boundaries. Key results on statistical comparisons of MSs’ Test datasets to the Benchmark dataset: in 2 cases out of 5 Test Countries/stream types, no differences for the High/Good boundary were observed; in 3 cases out of 5, no differences for the Good/Moderate boundary were observed; when differences were observed, they could usually be adjusted by very minor modifications to the National systems

  22. Outline of Option 2: Use of a common metric(s) method identified specifically for the purposes of the intercalibration exercise. Step 1a,b: to select and check ICMs → done (BAC). Step 2: to set agreed boundaries for the ICM Index → done (example); IC sites acceptance → done. Step 3: to compare agreed ICM boundaries to National boundaries → done. Step 4: adjust or accept National boundaries

  23. What about hybrid options? E.g.: to use the ICM Index approach (Option 2) for comparing existing National boundaries (Option 3) → tested: applicable. To use the ICM Index approach (Option 2) for selecting IC (flag) sites (Option 3) → tested: applicable. To use international datasets for benchmarking (Option 2) and an ICM Index for harmonization (Option 2) → tested: applicable. To use international datasets for benchmarking (Option 2) and each MS method for comparing existing National boundaries (Option 3) → to be tested soon…

  24. Thank you for your attention
