
Framework for a Comparison of EEA and EPA Indicators



Presentation Transcript


  1. Framework for a Comparison of EEA and EPA Indicators Heather Case and Jay Messer U.S. EPA Ispra, Italy January, 2006

  2. Purposes • Propose a framework for a cooperative effort to conduct an in-depth comparison of EEA and EPA environmental indicators. • Present examples of EPA indicators from the upcoming EPA Report on the Environment that illustrate the key comparison issues. • Set the stage for discussing additional issues that involve electronic augmentation and updating of indicators going forward (next presentation).

  3. EPA’s Report on the Environment Recent events and future directions

  4. Since we met in May 2005 • July 2005: Peer review of proposed indicators • Oct. 2005: Second peer review of newly proposed or significantly revised indicators • Looking Ahead • January 2006: Posting of “final” indicators for ROE 2007 on the Internet • September 2006: Scientific peer review of full ROE Technical Document • Spring 2007: Final release of Technical Document

  5. Background • At the Washington, DC meeting in May 2005, we decided to pursue a comparison of EEA and EPA indicators (including scaling). • EPA post-doc Ellen Natesan developed a white paper comparing EEA core indicators with indicators from EPA’s 2003 Draft Report on the Environment. • Since then, many indicators have been updated and regionalized, and new indicators have been added for the 2007 ROE.

  6. Propositions • Indicators may fundamentally differ because of purpose, criteria, etc. • Indicators may fundamentally differ because of monitoring design, methods, averaging period, scale, and reference points • To the extent that the indicators are transparent and reproducible, and the data well-documented and accessible, if two indicators are not fundamentally different, an opportunity exists to calibrate one indicator against the other.
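The calibration proposed in the last bullet could start from paired values of two indicators that track the same underlying condition. A minimal sketch in Python, assuming two hypothetical annual series (the names and numbers are illustrative, not drawn from EEA or EPA data):

    import numpy as np

    # Hypothetical paired annual values of the same environmental condition
    # as reported by two different indicator programs.
    indicator_a = np.array([12.1, 11.4, 10.9, 10.2, 9.8, 9.1])
    indicator_b = np.array([13.0, 12.2, 11.8, 11.1, 10.5, 9.9])

    # Ordinary least-squares calibration: b ~ slope * a + intercept,
    # plus a correlation as a rough check that the series track the same trend.
    slope, intercept = np.polyfit(indicator_a, indicator_b, 1)
    r = np.corrcoef(indicator_a, indicator_b)[0, 1]
    print(f"b ~ {slope:.2f} * a + {intercept:.2f} (r = {r:.3f})")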

  7. Overview of Proposed Criteria for Comparisons • Purpose of indicators • Indicator definition, criteria and “ground rules” • Monitoring design and data comparability • Quality assurance • Scaling • Data management and accessibility

  8. Purpose of Indicators • ROE indicators answer questions about the state of the environment over time (e.g., are ozone levels decreasing over time?) • Accountability indicators track the effectiveness of particular programs (e.g., are controls on mobile sources reducing ozone?) • Must be responsive to early actions • Must differentiate among causes • May involve cost-effectiveness

  9. Purpose of Indicators • Examples • What are the trends in outdoor air quality and their effects on human health and the environment? • Sulfur Dioxide Emissions • Ozone Injury to Forest Plants • What are the trends in extent and condition of fresh surface waters and their effects on human health and the environment? • Nitrogen and Phosphorus in Streams in Agricultural Watersheds • Benthic Invertebrates in Wadeable Streams

  10. Indicator Definition and Criteria • ROE Indicator – a numerical value derived from actual measurements of a pressure, ambient condition, exposure, or human health or ecological condition over a specified geographic domain, whose trends over time represent or draw attention to underlying trends in the condition of the environment.

  11. Indicator Types

  12. What is currently NOT included • Administrative indicators (government actions and responses to them) • Resource use • Economic and “sustainability” indicators

  13. ROE Indicator Criteria • The indicator is useful. It answers (or makes an important contribution to answering) a question in the ROE. • The indicator is objective. It is developed and presented in an accurate, clear, complete, and unbiased manner. • The indicator is transparent and reproducible. The specific data used and the specific assumptions, analytic methods, and statistical procedures employed are clearly stated.

  14. ROE Indicator Criteria (cont.) • The underlying data are characterized by sound collection methodologies, data management systems to protect their integrity, and quality assurance procedures. • Data are available to describe changes or trends, and the latest available data are timely. • The data are comparable across time and space, and representative of the target population. Trends depicted in the indicator accurately represent the underlying trends in the target population.

  15. ROE Indicator Modeling “Ground Rule” • A model may be used to calculate an indicator value based on a physical measurement that is not itself the indicator, as long as the physical value and the indicator are at the same hierarchical level. • Permissible: NOx emissions based on fuel consumption and an emissions factor • Not permissible: acid deposition based on SO2 emissions
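The permissible example above is an activity measure multiplied by an emission factor. A minimal sketch, with placeholder fuel quantities and factor rather than actual EPA emission factors:

    # Emissions = activity (fuel burned) x emission factor, converted to tons.
    def nox_emissions_tons(fuel_burned_tons: float, factor_lb_per_ton: float) -> float:
        """NOx emissions (tons) = fuel burned (tons) * factor (lb/ton) / 2000."""
        return fuel_burned_tons * factor_lb_per_ton / 2000.0

    # Hypothetical facility: 50,000 tons of coal at an assumed 15 lb NOx per ton burned.
    print(nox_emissions_tons(50_000, 15.0))  # -> 375.0 tons NOx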

  16. Monitoring design & data comparability • What is being measured? Are the methods equivalent? Is guidance available and being followed? • Where are the monitoring sites located? How were the locations chosen (e.g., purposive vs. probability designs)? • When are samples collected? What is the averaging period? • What are the reference points?

  17. Monitoring design & data comparability • What is being measured? Are the methods equivalent? Is guidance available and being followed?

  18. Examples • SO2 and VOC Emissions • Fuel Combustion: Power Generators – emissions from coal-, gas-, and oil-fired power plants required to use continuous emissions monitors (SO2 only) • Fuel Combustion: Other Sources – industrial, commercial, institutional, and residential heaters and boilers not required to use CEMs – emissions factors and DOE fuel use data • Other Industrial Processes – e.g., chemical production and petroleum refining – emissions factors, production data

  19. SO2 and VOC Emissions • On-road Vehicles – e.g., cars, trucks, buses, and motorcycles – FHWA mileage estimates and EPA’s MOBILE6 model • Non-road Vehicles and Engines – e.g., farm and construction equipment, lawnmowers, chainsaws, boats/ships, aircraft – EPA’s NONROAD model

  20. National Emissions Inventory • Conducted every three years • EPA develops some data (electricity generators) • States develop other data with guidance from EPA • EPA performs consistency checks • Methods evolve – only the 1990 inventory is fully reconciled to the latest inventory year

  21. Monitoring design & data comparability • Where are the monitoring sites located? How were the locations chosen (e.g., purposive vs. probability designs)? • When are samples collected? What is the averaging period? • What is the reference point?

  22. Examples • Nitrogen and Phosphorus in streams • Nitrate in streams in agricultural watersheds • Nutrient Concentrations in wadeable streams

  23. Three possibilities • Section 305(b) of Clean Water Act – States • National Water Quality Assessment (NAWQA) – U.S. Geological Survey • Wadeable Streams Assessment (WSA) – EPA and States

  24. Section 305(b) of Clean Water Act • States determine (attainable) designated uses for each water body • Monitor against water quality standards appropriate for the designated use • Report to EPA every two years on percentage of water bodies that meet standards (possible indicator)

  25. Section 305(b) of Clean Water Act • Only a small fraction of water bodies assessed • Biases in designation of use and water bodies monitored • Standards and methods vary from state to state • Rejected as indicator in FY03 Draft ROE for failure to meet indicator criteria

  26. Nutrients in Streams • NAWQA: purposive design (50 watersheds); sampled at many points in the watershed; sampled 12–13 times/year; no reference levels • WSA: probability design (1,392 reaches); sampled at one point on the reach; sampled once every 4 years (summer); reference levels based on statistics from regional reference sites

  27. Percent of Stream Sites [figure not reproduced] • EPA’s drinking water standard is 10 ppm (Maximum Contaminant Level).

  28. Nutrients in Streams • NAWQA: better characterization of sampled streams and watersheds; but expensive, cannot be extrapolated to unsampled streams, and no confidence bounds for national estimates • WSA: unbiased estimates of all wadeable streams with known confidence, and comparatively inexpensive; but poor characterization of individual reaches and no data for extreme events or other seasons
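The “known confidence” in the WSA column comes from its probability design: the sampled reaches can be treated as a statistical sample of all wadeable streams. A minimal sketch of that kind of design-based estimate, assuming for simplicity an equal-probability sample and simulated concentrations (the actual survey uses design weights):

    import numpy as np

    # Hypothetical nitrate concentrations for 1,392 sampled reaches.
    rng = np.random.default_rng(0)
    nitrate_ppm = rng.lognormal(mean=1.0, sigma=0.8, size=1392)
    reference_level = 10.0  # e.g., the 10 ppm drinking water MCL

    # Fraction of reaches exceeding the reference level, with an
    # approximate 95% confidence interval for the national estimate.
    exceed = nitrate_ppm > reference_level
    p_hat = exceed.mean()
    se = np.sqrt(p_hat * (1 - p_hat) / exceed.size)
    print(f"{100 * p_hat:.1f}% of reaches exceed {reference_level} ppm "
          f"(95% CI +/- {196 * se:.1f} percentage points)")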

  29. Quality assurance • Are controls in place to ensure that the data are of adequate and known quality? • Are the metadata available? • Links to QA plans and metadata for ROE indicators are in the Indicator QA forms (Heather Case’s presentation)

  30. Scaling • What is the most disaggregated level at which the indicator is meaningful? • Is the reference level appropriate for the extent and grain size of the indicator? How important are episodes? • How sensitive is the indicator to the effects of a few very large entities?

  31. Scaling • What is the most disaggregated level at which the indicator is meaningful? • SO2 and VOC emissions • ROE07 - 10 EPA Regions • 3100 US counties (theoretically) • N&P in streams • ROE07 - national only • NAWQA – 50 predominantly agricultural watersheds • WSA – 10 EPA Regions (theoretically) or 9 ecoregions

  32. Scaling • What is the most disaggregated level at which the indicator is meaningful?

  33. Scaling • What is the most disaggregated level at which the indicator is meaningful?

  34. Scaling • Is the reference level appropriate for the extent and grain size of the indicator? How important are episodes? • Mean levels of toxic chemicals in a stream may not mean much if storm events do the damage • How sensitive is the indicator to the effects of a few very large entities? • A very small percentage of emitters may be responsible for a large fraction of total emissions – to the extent that they are concentrated in a few states or regions, they may skew national statistics
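The sensitivity question in the last bullet can be checked directly from facility-level data: rank the sources and see what share of the total the largest few contribute. A minimal sketch with hypothetical emissions values:

    import numpy as np

    # Hypothetical facility-level emissions (tons); values are illustrative only.
    emissions_tons = np.array([120_000, 45_000, 8_000, 3_500, 2_000, 1_500,
                               900, 600, 400, 300, 250, 200])

    # Share of the total contributed by the three largest sources.
    top3_share = np.sort(emissions_tons)[-3:].sum() / emissions_tons.sum()
    print(f"Top 3 of {emissions_tons.size} sources emit {100 * top3_share:.0f}% of the total")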

  35. Data management and accessibility • The key to transparency and reproducibility • All ROE indicators have • Data underlying the figures available in Excel spreadsheets online • Links to parent databases • Some ROE indicators have • Links to datasets (or data in Excel spreadsheets) that underlie the data supporting the figures.

  36. Conclusions • Indicators may fundamentally differ because of purpose, criteria, etc. • Indicators may fundamentally differ because of monitoring design, methods, averaging period, scale, and reference points • To the extent that the indicators are transparent and reproducible, and the data well-documented and accessible, if two indicators are not fundamentally different, an opportunity exists to calibrate one indicator against the other.
