
WRAP 2002 Visibility Modeling: Emission, Meteorology Inputs and CMAQ Performance Evaluation


Presentation Transcript


  1. WRAP 2002 Visibility Modeling: Emission, Meteorology Inputs and CMAQ Performance Evaluation. Gail Tonnesen, Bo Wang, Chao-Jung Chien, Zion Wang, Mohammad Omary (University of California, Riverside); Zac Adelman, Andy Holland (University of North Carolina); Ralph Morris et al. (ENVIRON International Corporation, Novato, CA). WRAP Attribution of Haze Meeting, Denver, CO, July 22, 2004

  2. Summary of RMC 2002 Modeling • Annual MM5 simulations run at the RMC • Emissions processed with SMOKE • Preliminary 2002 Scenario C used here • CMAQ version 4.3 (released October 2003) • Data summaries, QA, and results are posted on the RMC web page: www.cert.ucr.edu/aqm/308

  3. MM5 Modeling Domain (36 & 12 km) • National RPO grid • Lambert Conformal Conic projection • Center: −97°, 40° • True latitudes: 33°, 45° • MM5 domains • 36 km: (165, 129, 34) • 12 km: (220, 199, 34) • 24-category USGS land-use data • 36 km: 10 min. (~19 km) • 12 km: 5 min. (~9 km)
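As a rough illustration of how the slide's projection parameters map longitude/latitude onto grid cells, here is a minimal Python sketch using pyproj. The spherical earth radius, the grid-origin coordinates, and the function names are our own assumptions for illustration, not values taken from the presentation.

```python
# Sketch: map lon/lat to the national RPO Lambert conformal grid using
# the projection parameters from this slide (center -97, 40; true
# latitudes 33 and 45). Grid origin and cell indexing are illustrative.
from pyproj import Proj

# Lambert Conformal Conic on a spherical earth, as is conventional for
# MM5; the 6370 km sphere radius is an assumption here.
lcc = Proj(proj="lcc", lat_1=33.0, lat_2=45.0, lat_0=40.0, lon_0=-97.0,
           a=6370000.0, b=6370000.0)

def lonlat_to_cell(lon, lat, x_origin_m, y_origin_m, dx_m=36000.0):
    """Return (col, row) of the 36-km cell containing (lon, lat).

    x_origin_m / y_origin_m are the projected coordinates of the grid's
    southwest corner (hypothetical values in the example below).
    """
    x, y = lcc(lon, lat)
    return int((x - x_origin_m) // dx_m), int((y - y_origin_m) // dx_m)

# Example with a hypothetical SW corner for a 165 x 129 36-km domain:
print(lonlat_to_cell(-105.0, 39.7, x_origin_m=-2952000.0, y_origin_m=-2304000.0))
```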

  4. MM5 Physics

  5. Subdomains for 36/12-km Model Evaluation: 1 = Pacific NW, 2 = SW, 3 = North, 4 = Desert SW, 5 = CenrapN, 6 = CenrapS, 7 = Great Lakes, 8 = Ohio Valley, 9 = SE, 10 = NE, 11 = MidAtlantic

  6. Evaluation Review • Evaluation methodology: synoptic evaluation; statistical evaluation using METSTAT and surface data (wind speed, wind direction, temperature, relative humidity); evaluation against upper-air observations • Statistics: absolute bias and error, RMSE, IOA (Index of Agreement) • Evaluation datasets: NCAR dataset ds472 airport surface met observations; twice-daily upper-air profile observations (~120 in US) for temperature and moisture

  7. METSTAT Evaluation Package • Statistics: absolute bias and error, RMSE, IOA • Daily and, where appropriate, hourly evaluation • Statistical performance benchmarks, based on an analysis of > 30 MM5 and RAMS runs • Not meant as a pass/fail test, but to put modeling results into perspective
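For readers unfamiliar with the statistics METSTAT reports, here is a minimal NumPy sketch of the four metrics named above (mean bias, mean absolute error, RMSE, and Willmott's index of agreement). The function name and example values are illustrative; this is not METSTAT's actual code.

```python
# Sketch of METSTAT-style surface statistics, computed with NumPy;
# function and variable names are ours, not METSTAT's.
import numpy as np

def met_stats(model, obs):
    """Mean bias, mean absolute error, RMSE, and Index of Agreement."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    diff = model - obs
    bias = diff.mean()                   # mean bias
    error = np.abs(diff).mean()          # mean absolute (gross) error
    rmse = np.sqrt((diff ** 2).mean())   # root mean square error
    # Willmott's index of agreement: 1 is perfect, 0 is no skill.
    denom = ((np.abs(model - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    ioa = 1.0 - (diff ** 2).sum() / denom
    return {"bias": bias, "error": error, "rmse": rmse, "ioa": ioa}

# Example: hourly 2-m temperatures (K) from model and observations
print(met_stats([288.1, 290.4, 293.0], [287.5, 291.0, 292.2]))
```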

  8. Evaluation of 36-km WRAP MM5 Results • Model performed reasonably well for eastern subdomains, but not for the West (WRAP region) • General cool, moist bias in the western US • Difficulty resolving western US orography? May get better performance with higher resolution • Pleim-Xiu scheme optimized more for the eastern US? More optimization needed for desert and rocky ground? • MM5 performs better in winter than in summer (weaker forcing in summer) • July 2002 Desert SW subdomain exhibits low temperature and high humidity bias • Source: "2002 MM5 Model Evaluation: 12 vs. 36 km Results," Chris Emery, Yiqin Jia, Sue Kemball-Cook, and Ralph Morris (ENVIRON International Corporation) and Zion Wang (UCR CE-CERT), WRAP National RPO Meeting, May 25, 2004

  9. WRAP 36-km/12-km July Wind Performance Comparison [Scatter plot: wind direction error (degrees, 0-120) vs. wind speed RMSE (m/s, 0-3.5) for the PacNW, SW, Desert SW, and North subdomains at 36 km and 12 km, with the benchmark envelope from previous MM5/RAMS runs]

  10. MM5 Implications for AoH • The RMC is continuing to test alternative MM5 configurations, to be completed at the end of 2004 • Expect some reduction in bias and error in the WRAP states; however, even in the best case there will be error and bias in MM5 that must be considered when using CMAQ for source attribution

  11. Emissions Inventory Summary • Preliminary 2002 Scenario C, based on the 1996 NEI grown to 2002, with many updates by WRAP contractors and other RPOs • Processed for CMAQ using SMOKE • Extensive QA plots on the web page, both SMOKE QA and post-SMOKE QA

  12. Emissions Sources by Category & RPO

  13. WRAP 2002 Annual NOx Emissions [Pie chart of emissions by source category: Area, Biogenic, On-Road, Non-Road, Road Dust, Point, Rx Fire, Ag Fire, Wildfire, Offshore]

  14. 2002 WRAP NOx Emissions by Source & State [Stacked bar chart, tons/yr (0 to 1,400,000): Ag Fire, Rx Fire, Wildfire, Area, Point, Nonroad, and Onroad emissions for each of the 13 WRAP states]

  15. WRAP 2002 Annual SO2 Emissions [Pie chart of emissions by source category: Area, Biogenic, On-Road, Non-Road, Road Dust, Point, Rx Fire, Ag Fire, Wildfire, Offshore]

  16. 2002 WRAP SO2 Emissions by Source & State [Stacked bar chart, tons/yr (0 to 3.0E+05): Onroad, Ag Fire, Rx Fire, Wildfire, Area, Nonroad, and Point emissions for each of the 13 WRAP states]

  17. 2002 WRAP NH3 Emissions by Source Category [Stacked bar chart, tons/yr (0 to 2.5E+05): Nonroad, Ag Fire, Rx Fire, Point, Onroad, Wildfire, and Area emissions for each of the 13 WRAP states]

  18. Emissions Summary • Preliminary 2002 EI used here • Updates for the final 2002 EI will include: new EI data from other RPOs and Canada; the 2002 NEI to replace the grown 1996 NEI; reprocessing in SMOKE with the final MM5 • All final inputs are ready now except Canada and MM5
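A by-source, by-state summary table like the charts on slides 14-17 could be assembled from tabulated annual emissions with a few lines of pandas. The CSV layout and file name below are hypothetical stand-ins for the actual SMOKE QA reports.

```python
# Sketch: build a by-state, by-source emissions summary like the bar
# charts on slides 14-17. The input CSV layout (state, source, tons_yr
# columns) is a hypothetical stand-in for real SMOKE report output.
import pandas as pd

emis = pd.read_csv("wrap_2002_nox_emissions.csv")   # hypothetical file

# Rows: state; columns: source category; values: annual tons
table = emis.pivot_table(index="state", columns="source",
                         values="tons_yr", aggfunc="sum", fill_value=0)

# Order states by total emissions, as on the slide's x-axis
table = table.loc[table.sum(axis=1).sort_values(ascending=False).index]
print(table.round(0))
```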

  19. CMAQ Simulations • CMAQ v4.3 • 36-km grid, 112 x 148 x 19 • Annual run • CB4 chemistry • Evaluated using IMPROVE, CASTNet, NADP, STN, and AIRS/AQS data

  20. PM Performance Criteria • Guidance from EPA is not yet ready, so it is difficult to assert that the model is adequate • We therefore use a variety of ad hoc performance goals and benchmarks to display CMAQ results • We completed a variety of analyses: over 20 performance metrics, scatter plots and time-series plots, soccer plots, bugle plots

  21. Goal of Model Evaluation • We completed a variety of analyses: over 20 performance metrics, scatter plots and time-series plots, soccer plots, bugle plots • Goal is to decide whether we have enough confidence to use the model for AoH: is this a valid application of the model?

  22. Soccer Goal Plots • Plot error as a function of bias • Ad hoc performance goal: 15% bias, 35% error, based on O3 modeling goals • Larger error and bias are observed among different PM data methods and monitoring networks • Performance benchmark: 30% bias, 70% error (2x the performance goals); PM models can achieve this level in many cases
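A soccer goal plot of this kind can be sketched in a few lines of matplotlib; the nested boxes below use the goal and benchmark values quoted above, while the plotted points are made-up placeholders, not WRAP results.

```python
# Sketch of a "soccer goal" plot: fractional error vs. fractional bias,
# with the slide's ad hoc goal (15% / 35%) and benchmark (30% / 70%)
# drawn as nested boxes. Plotted points are made-up placeholders.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, ax = plt.subplots()

# Goal box: |bias| <= 15%, error <= 35%; benchmark box: 30% / 70%
for half_bias, err, style in [(15, 35, "g-"), (30, 70, "r--")]:
    ax.add_patch(patches.Rectangle((-half_bias, 0), 2 * half_bias, err,
                                   fill=False, edgecolor=style[0],
                                   linestyle=style[1:]))

# Hypothetical monthly (bias %, error %) pairs for one species
ax.plot([-22, 5, 40, -8], [45, 30, 85, 25], "ko")

ax.set_xlim(-100, 100); ax.set_ylim(0, 120)
ax.set_xlabel("Fractional bias (%)"); ax.set_ylabel("Fractional error (%)")
plt.show()
```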

  23. Annual CMAQ vs IMPROVE

  24. [Seasonal panels vs. IMPROVE: Spring, Summer, Fall, Winter]

  25. Annual CMAQ vs CASTNet

  26. [Seasonal panels vs. CASTNet: Spring, Summer, Fall, Winter]

  27. Annual CMAQ vs STN

  28. [Seasonal panels vs. STN: Spring, Summer, Fall, Winter]

  29. Annual CMAQ vs NADP

  30. [Seasonal panels vs. NADP: Spring, Summer, Fall, Winter]

  31. Performance Goals and Criteria (proposed by Jim Boylan) • Based on FE and FB calculations • Vary as a function of species concentration • Goals: FE ≤ +50% and FB ≤ ±30% • Criteria: FE ≤ +75% and FB ≤ ±60% • Less abundant species should have less stringent performance goals and criteria (see the sketch after this slide)
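The FB and FE metrics behind these goals have standard definitions, FB = (2/N) Σ (M−O)/(M+O) and FE = (2/N) Σ |M−O|/(M+O), expressed in percent; a minimal NumPy sketch (our naming and illustrative data, not the presentation's code) follows.

```python
# Sketch of the fractional bias (FB) and fractional error (FE) metrics
# the Boylan goals are stated in; the helper name is ours. FB and FE
# are bounded (within +/-200% and 0-200% respectively).
import numpy as np

def fb_fe(model, obs):
    """Fractional bias and fractional error, in percent."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    frac = 2.0 * (model - obs) / (model + obs)
    fb = 100.0 * frac.mean()
    fe = 100.0 * np.abs(frac).mean()
    return fb, fe

# Example: monthly mean sulfate (ug/m3) at a few IMPROVE-like sites
fb, fe = fb_fe([1.2, 0.8, 2.5], [1.0, 1.1, 2.0])
print(f"FB = {fb:+.1f}% (goal within +/-30%), FE = {fe:.1f}% (goal <= 50%)")
```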

  32. Performance Goals and Criteria- Proposed by Jim Boylan • PM Performance Goals • Proposed PM Performance Criteria

  33. Monthly SO4 Fractional Bias

  34. Monthly SO4 Fractional Error

  35. Monthly NO3 Fractional Bias

  36. Monthly NO3 Fractional Error

  37. Monthly NH4 Fractional Bias

  38. Monthly NH4 Fractional Error

  39. Monthly OC Fractional Bias

  40. Monthly OC Fractional Error

  41. Monthly EC Fractional Bias

  42. Monthly EC Fractional Error

  43. Monthly PM25 Fractional Bias

  44. Monthly PM25 Fractional Error

  45. CMAQ & EI Versions • TSSA results are run in CMAQ v4.4 with emissions version Preliminary 2002 C • Performance evaluation used CMAQ v4.3 • Previous CMAQ runs used CMAQ v4.3 with Preliminary 2002 B emissions (no fires)

  46. CMAQ v4.3 & v4.4 versus IMPROVE July

  47. CMAQ Ozone Performance • CMAQ v4.3 mean fractional bias (no filter): January +25% MFB, July −20% MFB • Slightly worse January O3 performance in v4.4
