PM Model Performance Goals and Criteria


Presentation Transcript


  1. PM Model Performance Goals and Criteria • James W. Boylan, Georgia Department of Natural Resources - VISTAS • National RPO Modeling Meeting, Denver, CO • May 26, 2004

  2. Outline • Standard Bias and Error Calculations • Proposed PM Model Performance Goals and Criteria • Evaluation of Eight PM Modeling Studies Using Proposed Goals and Criteria • Discussion: Should EPA recommend PM Model Performance Goals and Criteria in the PM Modeling Guidance Document?

  3. PM Model Evaluations • Air Quality Modeling and Ambient Measurements are two different ways to estimate actual ambient concentrations of pollutants in the atmosphere • Both modeling and measurements have some degree of uncertainty • Measurements should not be considered the absolute truth • Large differences exist between monitoring networks due to sampling and analysis techniques • Bias and error calculations should be normalized not by the observations alone, but by the average of the modeled and observed values

  4. Performance Metrics • Mean Normalized Bias and Error • Usually associated with an observation-based minimum threshold • Some components of PM can be very small, making it difficult to set a reasonable minimum threshold value without excluding a majority of the data points • Without a minimum threshold, very large normalized biases and errors can result when observations are close to zero, even though the absolute biases and errors are very small • A few data points can dominate the metric • Overestimations are weighted more heavily than equivalent underestimations • Assumes observations are absolute truth

  5. Performance Metrics • Normalized Mean Bias and Error • Biased towards overestimations • Assumes observations are absolute truth • Mean Fractional Bias and Error • Bounds maximum bias and error • Symmetric: gives equal weight to underestimations and overestimations • Normalized by average of observation and model
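The three metric pairs above have standard definitions: MNB = (1/N) Σ (Mi − Oi)/Oi, NMB = Σ (Mi − Oi) / Σ Oi, and MFB = (2/N) Σ (Mi − Oi)/(Mi + Oi), with each error metric using |Mi − Oi| in place of (Mi − Oi). A minimal NumPy sketch of all three pairs (the function names and the optional observation threshold argument are illustrative, not from the slides):

```python
import numpy as np

def mnb_mne(obs, mod, threshold=None):
    """Mean Normalized Bias/Error (%): each pair is normalized by the
    observation, so near-zero observations can dominate the result;
    an observation-based minimum threshold is commonly applied."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    if threshold is not None:
        keep = obs >= threshold
        obs, mod = obs[keep], mod[keep]
    mnb = np.mean((mod - obs) / obs)
    mne = np.mean(np.abs(mod - obs) / obs)
    return 100.0 * mnb, 100.0 * mne

def nmb_nme(obs, mod):
    """Normalized Mean Bias/Error (%): sums normalized by the observed
    sum; still treats the observations as the reference truth."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    nmb = np.sum(mod - obs) / np.sum(obs)
    nme = np.sum(np.abs(mod - obs)) / np.sum(obs)
    return 100.0 * nmb, 100.0 * nme

def mfb_mfe(obs, mod):
    """Mean Fractional Bias/Error (%): normalized by the average of
    model and observation, so bias is bounded to [-200%, +200%] and
    over- and underestimates of the same ratio weigh equally."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    mfb = 200.0 * np.mean((mod - obs) / (mod + obs))
    mfe = 200.0 * np.mean(np.abs(mod - obs) / (mod + obs))
    return mfb, mfe
```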

  6. Example Calculations • Mean Normalized Bias and Error • Most biased and least useful of the three metrics • Normalized Mean Bias and Error • Mean Fractional Bias and Error • Least biased and most useful of the three metrics
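To see the asymmetry concretely (illustrative numbers, not from the original slide): for a single pair with O = 1 and M = 2, MNB = (2 − 1)/1 = +100%, while for O = 2 and M = 1 it is (1 − 2)/2 = −50%, so a factor-of-two overestimate counts twice as heavily as an equivalent underestimate. The fractional bias gives 2(2 − 1)/(2 + 1) = +66.7% and 2(1 − 2)/(1 + 2) = −66.7%, treating both cases symmetrically, and it stays bounded at ±200% no matter how large the mismatch.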

  7. PM Goals and Criteria • Performance Goals: the level of accuracy considered close to the best a model can be expected to achieve • Performance Criteria: the level of accuracy considered acceptable for regulatory applications • It has been suggested that we need different performance goals and criteria for: • Different Species • Different Seasons • Different Parts of the Country • 20% Haziest and 20% Cleanest Days • Answer: use performance goals and criteria that vary as a function of concentration

  8. PM Modeling Studies Used for Performance Benchmarks • SAMI (GT) • July 1995 (URM/IMPROVE/variable grid) • July 1991 (URM/IMPROVE/variable grid) • May 1995 (URM/IMPROVE/variable grid) • May 1993 (URM/IMPROVE/variable grid) • March 1993 (URM/IMPROVE/variable grid) • February 1994 (URM/IMPROVE/variable grid) • VISTAS (UCR/AG/Environ) • July 1999 (CMAQ/IMPROVE/36 km) • July 1999 (CMAQ/IMPROVE/12 km) • July 2001 (CMAQ/IMPROVE/36 km) • July 2001 (CMAQ/IMPROVE/12 km) • January 2002 (CMAQ/IMPROVE/36 km) • January 2002 (CMAQ/IMPROVE/12 km)

  9. PM Modeling Studies Used for Performance Benchmarks • WRAP 309 (UCR/CEP/Environ) • January 1996 (CMAQ/IMPROVE/36 km) • February 1996 (CMAQ/IMPROVE/36 km) • March 1996 (CMAQ/IMPROVE/36 km) • April 1996 (CMAQ/IMPROVE/36 km) • May 1996 (CMAQ/IMPROVE/36 km) • June 1996 (CMAQ/IMPROVE/36 km) • July 1996 (CMAQ/IMPROVE/36 km) • August 1996 (CMAQ/IMPROVE/36 km) • September 1996 (CMAQ/IMPROVE/36 km) • October 1996 (CMAQ/IMPROVE/36 km) • November 1996 (CMAQ/IMPROVE/36 km) • December 1996 (CMAQ/IMPROVE/36 km)

  10. PM Modeling Studies Used for Performance Benchmarks • WRAP 308 (UCR/CEP/Environ) • Summer 2002 (CMAQ/IMPROVE/36 km/WRAP) • Summer 2002 (CMAQ/IMPROVE/36 km/US) • Winter 2002 (CMAQ/IMPROVE/36 km/WRAP) • Winter 2002 (CMAQ/IMPROVE/36 km/US) • EPA (Clear Skies) • Fall 1996 (REMSAD/IMPROVE/36 km) • Spring 1996 (REMSAD/IMPROVE/36 km) • Summer 1996 (REMSAD/IMPROVE/36 km) • Winter 1996 (REMSAD/IMPROVE/36 km)

  11. PM Modeling Studies Used for Performance Benchmarks • MANE-VU (GT) • July 2001 (CMAQ/IMPROVE/36 km) • July 2001 (CMAQ/SEARCH/36 km) • January 2002 (CMAQ/IMPROVE/36 km) • January 2002 (CMAQ/SEARCH/36 km) • Midwest RPO • August 1999 (CMAQ/IMPROVE/36 km) • August 1999 (CAMx/IMPROVE/36 km) • August 1999 (REMSAD/IMPROVE/36 km) • January 2000 (CMAQ/IMPROVE/36 km) • January 2000 (CAMx/IMPROVE/36 km) • January 2000 (REMSAD/IMPROVE/36 km)

  12. PM Modeling Studies Used for Performance Benchmarks • EPRI (AER/TVA/Environ) • July 1999 (CMAQ/IMPROVE/32 km) • July 1999 (CMAQ/IMPROVE/8 km) • July 1999 (MADRID/IMPROVE/32 km) • July 1999 (MADRID/IMPROVE/8 km) • July 1999 (CAMx/IMPROVE/32 km)

  13. Mean Fractional Error

  14. Mean Fractional Bias

  15. Proposed PM Goals and Criteria • Based on MFE and MFB calculations • Vary as a function of species concentrations • Goals: MFE ≤ +50% and MFB ≤ ±30% • Criteria: MFE ≤ +75% and MFB ≤ ±60% • Less abundant species should have less stringent performance goals and criteria • Continuous functions with the features of: • Asymptotically approaching the proposed goals and criteria when the mean of the observed and modeled concentrations is greater than 2.5 µg/m3 • Approaching +200% MFE and ±200% MFB when the mean of the observed and modeled concentrations is extremely small
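One family of curves with exactly these features is an exponential decay toward the flat-line values. The sketch below uses that form; the exponential shape and the 0.5 µg/m3 decay scale are assumptions chosen to reproduce the listed features, not values taken from the slides.

```python
import numpy as np

# Illustrative continuous goal/criteria curves: equal to 200% as the
# mean concentration c = (obs + mod)/2 approaches zero, and flattening
# to the constant values (50/75% MFE, 30/60% MFB) as c exceeds
# ~2.5 ug/m3. Exponential form and 0.5 ug/m3 scale are assumptions.

def mfe_goal(c):
    return 50.0 + 150.0 * np.exp(-np.asarray(c) / 0.5)

def mfe_criteria(c):
    return 75.0 + 125.0 * np.exp(-np.asarray(c) / 0.5)

def mfb_goal(c):  # applied symmetrically as +-mfb_goal(c)
    return 30.0 + 170.0 * np.exp(-np.asarray(c) / 0.5)

def mfb_criteria(c):  # applied symmetrically as +-mfb_criteria(c)
    return 60.0 + 140.0 * np.exp(-np.asarray(c) / 0.5)
```

At a mean concentration of 2.5 µg/m3 these curves sit within about one percentage point of their asymptotes, and at zero concentration they reach 200%, matching the two features listed above.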

  16. Proposed Goals and Criteria • Proposed PM Performance Goals • Proposed PM Performance Criteria

  17. MFE Goals and Criteria

  18. MFB Goals and Criteria

  19. Model Performance Zones • Zone I • Good Model Performance • Level I Diagnostic Evaluation (Minimal) • Zone II • Average Model Performance • Level II Diagnostic Evaluation (Standard) • Zone III • Poor Model Performance • Level III Diagnostic Evaluation (Extended) and Sensitivity Testing
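A natural reading of the zones (the slide does not spell out the boundaries, so this is an assumption) is that Zone I lies within the goal curve, Zone II between the goal and criteria curves, and Zone III outside the criteria. A sketch for MFE, reusing the illustrative exponential curves from above:

```python
import numpy as np

def performance_zone(c_mean, mfe):
    """Classify one species' performance, assuming Zone I = within the
    goal, Zone II = within the criteria but outside the goal, and
    Zone III = outside the criteria (illustrative curves, in %)."""
    goal = 50.0 + 150.0 * np.exp(-c_mean / 0.5)      # MFE goal curve
    criteria = 75.0 + 125.0 * np.exp(-c_mean / 0.5)  # MFE criteria curve
    if mfe <= goal:
        return "Zone I: good (minimal diagnostic evaluation)"
    if mfe <= criteria:
        return "Zone II: average (standard diagnostic evaluation)"
    return "Zone III: poor (extended evaluation and sensitivity testing)"

# Example: sulfate with a 3 ug/m3 mean concentration and 45% MFE
print(performance_zone(3.0, 45.0))  # -> Zone I
```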

  20. Mean Fractional Error

  21. Mean Fractional Bias

  22. Sulfate Mean Fractional Error

  23. Sulfate Mean Fractional Bias

  24. Nitrate Mean Fractional Error

  25. Nitrate Mean Fractional Bias

  26. Ammonium Mean Fractional Error

  27. Ammonium Mean Fractional Bias

  28. Organics Mean Fractional Error

  29. Organics Mean Fractional Bias

  30. EC Mean Fractional Error

  31. EC Mean Fractional Bias

  32. Soils Mean Fractional Error

  33. Soils Mean Fractional Bias

  34. PM2.5 Mean Fractional Error

  35. PM2.5 Mean Fractional Bias

  36. PM10 Mean Fractional Error

  37. PM10 Mean Fractional Bias

  38. CM Mean Fractional Error

  39. CM Mean Fractional Bias

  40. SAMI Mean Fractional Error

  41. SAMI Mean Fractional Bias

  42. SAMI Mean Fractional Error

  43. SAMI Mean Fractional Bias

  44. EPA Mean Fractional Error

  45. EPA Mean Fractional Bias

  46. VISTAS Mean Fractional Error

  47. VISTAS Mean Fractional Bias

  48. MANE-VU Mean Fractional Error

  49. MANE-VU Mean Fractional Bias
