
WRAP 2002 Visibility Modeling: Annual CMAQ Performance Evaluation using Preliminary 2002 version C Emissions. Gail Tonnesen, Bo Wang, Chao-Jung Chien, Zion Wang, Mohammad Omary University of California, Riverside Zac Adelman, Andy Holland University of North Carolina Ralph Morris et al.


Presentation Transcript


  1. WRAP 2002 Visibility Modeling: Annual CMAQ Performance Evaluation using Preliminary 2002 Version C Emissions. Gail Tonnesen, Bo Wang, Chao-Jung Chien, Zion Wang, Mohammad Omary (University of California, Riverside); Zac Adelman, Andy Holland (University of North Carolina); Ralph Morris et al. (ENVIRON International Corporation, Novato, CA)

  2. Summary of RMC 2002 Modeling • Annual MM5 simulations run at the RMC in December 2003 (additional MM5 testing in progress) • Emissions processed with SMOKE; preliminary 2002 Scenario C used here • CMAQ version 4.3 (released October 2003) • Data summaries, QA, and results are posted on the RMC web page: www.cert.ucr.edu/aqm/308

  3. MM5 Modeling Domain (36 & 12 km) • National RPO grid • Lambert Conformal Conic projection • Center: -97°, 40° • True latitudes: 33°, 45° • MM5 domain • 36 km: (165, 129, 34) • 12 km: (220, 199, 34) • 24-category USGS data • 36 km: 10 min. (~19 km) • 12 km: 5 min. (~9 km)
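As an illustration of the grid definition above, here is a minimal sketch of the national RPO Lambert Conformal Conic projection using pyproj. Only the projection parameters (center -97°, 40°; true latitudes 33°, 45°) come from the slide; the use of pyproj, the spherical earth radius, and the example point are assumptions for illustration.

```python
# Minimal sketch (assumption): define the national RPO Lambert Conformal Conic
# projection with the parameters listed on the slide and project a lon/lat point.
from pyproj import Proj

rpo_lcc = Proj(
    proj="lcc",
    lat_1=33.0,    # first true latitude
    lat_2=45.0,    # second true latitude
    lat_0=40.0,    # latitude of projection origin (grid center)
    lon_0=-97.0,   # central meridian (grid center)
    a=6370000.0,   # spherical earth radius often used with MM5 (assumption)
    b=6370000.0,
)

# Project a longitude/latitude point to grid coordinates (meters from the center)
x, y = rpo_lcc(-105.0, 39.7)   # hypothetical point near Denver, CO
print(f"x = {x / 1000:.1f} km, y = {y / 1000:.1f} km")
```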

  4. MM5 Physics

  5. Subdomains for 36/12-km Model Evaluation 1 = Pacific NW 2 = SW 3 = North 4 = Desert SW 5 = CenrapN 6 = CenrapS 7 = Great Lakes 8 = Ohio Valley 9 = SE 10 = NE 11 = MidAtlantic

  6. Evaluation Review • Evaluation Methodology • Synoptic Evaluation • Statistical Evaluation using METSTAT and surface data • WS, WD, T, RH • Evaluation against upper-air obs • Statistics: • Absolute Bias and Error, RMSE, IOA (Index of Agreement) • Evaluation Datasets: • NCAR dataset ds472 airport surface met observations • Twice-Daily Upper-Air Profile Obs (~120 in US) • Temperature • Moisture

  7. METSTAT Evaluation Package • Statistics: • Absolute Bias and Error, RMSE, IOA • Daily and, where appropriate, hourly evaluation • Statistical Performance Benchmarks • Based on an analysis of > 30 MM5 and RAMS runs • Not meant as a pass/fail test, but to put modeling results into perspective
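Since the statistics above (bias, error, RMSE, index of agreement) drive the evaluation, a minimal sketch of how they can be computed from paired model/observation arrays follows. This is an illustrative stand-in, not the actual METSTAT code; the function name and example values are assumptions.

```python
# Minimal sketch (assumption, not the METSTAT implementation) of the surface
# statistics listed above: mean bias, gross error, RMSE, and index of agreement.
import numpy as np

def met_stats(model, obs):
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    diff = model - obs
    bias = diff.mean()                    # mean bias
    error = np.abs(diff).mean()           # gross (absolute) error
    rmse = np.sqrt((diff ** 2).mean())    # root mean square error
    # Willmott index of agreement (IOA): 1 = perfect agreement, 0 = no skill
    obar = obs.mean()
    ioa = 1.0 - (diff ** 2).sum() / ((np.abs(model - obar) + np.abs(obs - obar)) ** 2).sum()
    return {"bias": bias, "error": error, "rmse": rmse, "ioa": ioa}

# Example: hourly 2-m temperature (K), model vs. ds472 surface observations
print(met_stats([289.1, 291.4, 293.0], [288.5, 292.0, 293.8]))
```

For wind direction, the model-minus-observation difference would first be wrapped into the range -180° to +180° before applying these formulas, so that, for example, 350° versus 10° counts as a 20° error.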

  8. Evaluation of 36-km WRAP MM5 Results • Model performed reasonably well for eastern subdomains, but not for the west (WRAP region) • General cool, moist bias in the Western US • Difficulty resolving Western US orography? • May get better performance with higher resolution • Pleim-Xiu scheme optimized more for the eastern US? • More optimization needed for desert and rocky ground? • MM5 performs better in winter than in summer • Weaker forcing in summer • July 2002 Desert SW subdomain exhibits a low temperature and high humidity bias (Source: 2002 MM5 Model Evaluation 12 vs. 36 km Results, Chris Emery, Yiqin Jia, Sue Kemball-Cook, and Ralph Morris (ENVIRON International Corporation) & Zion Wang (UCR CE-CERT), Western Regional Air Partnership (WRAP) National RPO Meeting, May 25, 2004)

  9. WRAP 36 km / 12 km July Wind Performance Comparison [Scatter plot: wind direction error (degrees) vs. wind speed RMSE (m/s) for the 36-km and 12-km subdomains (PacNW, SW, Desert SW, North), compared against the benchmark envelope from prior MM5/RAMS runs]

  10. Additional MM5 Testing • The RMC is continuing to test alternative MM5 configurations, to be completed at the end of 2004 • Final MM5 results will be used with the final 2002 emissions inventory, beginning in early 2005

  11. Emissions Inventory Summary • Preliminary 2002 Scenario C is based on the 1996 NEI, grown to 2002, with many updates by WRAP contractors and other RPOs • Processed for CMAQ using SMOKE • Extensive QA plots on the web page, including both SMOKE QA and post-SMOKE QA

  12. Emissions Sources by Category & RPO

  13. WRAP 2002 Annual NOx Emissions [Chart of emissions by source category: Area, Biogenic, On Road, Non Road, Road Dust, Point, Rx Fire, Ag Fire, Wildfire, Offshore]

  14. 2002 WRAP NOx Emissions by Source & State [Bar chart, tons/yr, for the 13 WRAP states (Washington, Oregon, California, Nevada, Idaho, Montana, Wyoming, Utah, Colorado, Arizona, New Mexico, North Dakota, South Dakota) by source category (Ag Fire, Rx Fire, Wildfire, Area, Point, Nonroad, Onroad)]

  15. WRAP 2002 Annual SO2 Emissions [Chart of emissions by source category: Area, Biogenic, On Road, Non Road, Road Dust, Point, Rx Fire, Ag Fire, Wildfire, Offshore]

  16. 2002 WRAP SO2 Emissions by Source & State [Bar chart, tons/yr, for the 13 WRAP states by source category (Onroad, Ag Fire, Rx Fire, Wildfire, Area, Nonroad, Point)]

  17. 2002 WRAP NH3 Emissions by Source Category [Bar chart, tons/yr, for the 13 WRAP states by source category (Nonroad, Ag Fire, Rx Fire, Point, Onroad, Wildfire, Area)]

  18. Emissions Summary • Preliminary 2002 version C EI used here • Next iteration is version D, which will include: new EI data from other RPOs, a new NH3 EI, and the fugitive dust model • Final 2002 EI will include: the 2002 NEI, reprocessing in SMOKE using the final MM5, and Canada point source emissions

  19. CMAQ Simulations • CMAQ v4.3 • 36-km grid, 112 x 148 x 19 • Annual run • CB4 chemistry • Evaluated using: IMPROVE, CASTNet, NADP, STN, AIRS/AQS • BCs from the 2001 GEOS-CHEM global model (Jacob et al.)

  20. PM Performance Criteria • Guidance from EPA is not yet ready, so it is difficult to assert that the model is adequate • Therefore, we use a variety of ad hoc performance goals and benchmarks to display CMAQ results

  21. Goal of Model Evaluation • We completed a variety of analyses: over 20 performance metrics, scatter plots & time-series plots, soccer plots, and bugle plots • Goal is to decide whether we have enough confidence to use the model for designing emissions control strategies: is this a valid application of the model?

  22. Soccer Goal Plots • Plot error as a function of bias • Ad hoc performance goal: 15% bias, 35% error, based on O3 modeling goals • Possibly too demanding for PM and for clean western conditions? Larger error & bias can exist among different PM data methods and monitoring networks • Performance benchmark: 30% bias, 70% error (2x the performance goals); PM models can achieve this level in many cases
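To make the soccer-goal construction concrete, here is a hedged plotting sketch: fractional bias versus fractional error, with the goal (±15% bias, 35% error) and benchmark (±30% bias, 70% error) drawn as nested boxes. The fractional bias/error formulas and the matplotlib usage are assumptions for illustration, and the plotted values are hypothetical; only the goal and benchmark levels come from the slide.

```python
# Illustrative sketch of a "soccer goal" plot: fractional error vs. fractional bias
# with goal and benchmark boxes. Values plotted here are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

def fractional_bias_error(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    fb = 2.0 * np.mean((model - obs) / (model + obs)) * 100.0          # %
    fe = 2.0 * np.mean(np.abs(model - obs) / (model + obs)) * 100.0    # %
    return fb, fe

fig, ax = plt.subplots()
for b, e, style, label in [(15, 35, "g--", "goal"), (30, 70, "r-", "benchmark")]:
    ax.plot([-b, b, b, -b, -b], [0, 0, e, e, 0], style, label=label)

# Hypothetical monthly model/obs pairs plotted as one point
fb, fe = fractional_bias_error([4.0, 5.5, 3.2], [3.5, 6.0, 2.8])
ax.plot(fb, fe, "ko")
ax.set_xlabel("Fractional bias (%)")
ax.set_ylabel("Fractional error (%)")
ax.legend()
plt.show()
```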

  23. CMAQ vs. IMPROVE Summary • SO4: negative bias in summer, positive bias in winter, good performance in spring and fall • NO3: large negative bias in summer, large positive bias in winter, and small bias but large error in March and October • OC: large negative bias in summer, small positive bias in winter • EC: good performance each month • Coarse Mass: generally large negative bias • Soil: small bias most months, except large positive bias in winter • PM2.5 and PM10: CMAQ over-predicts in winter, under-predicts in summer, small bias in spring and fall

  24. CMAQ vs. CASTNet Summary • CMAQ performance is better for CASTNet (the longer averaging period helps) but shows the same trend as IMPROVE: over-prediction in winter and under-prediction in summer • SO4 & NO3: large negative bias in summer, large positive bias in winter • In summer both SO2 and SO4 are under-predicted; in winter both are over-predicted (thus the problem is not in partitioning) • Total nitrate (NO3 + HNO3) performance is much better than aerosol nitrate performance, probably reflecting errors in sampling
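The total-nitrate point above can be illustrated with a small sketch: compute the same bias metric for aerosol NO3 alone and for NO3 + HNO3, so that errors in the gas/particle split cancel in the total. The function name and the paired values below are hypothetical, for illustration only.

```python
# Illustrative check (assumption): aerosol NO3 bias vs. total nitrate (NO3 + HNO3) bias.
import numpy as np

def frac_bias(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 2.0 * np.mean((model - obs) / (model + obs)) * 100.0   # %

# Hypothetical paired weekly CASTNet-style concentrations (ug/m3)
obs_no3, obs_hno3 = np.array([0.4, 0.6, 1.2]), np.array([1.1, 0.9, 0.7])
mod_no3, mod_hno3 = np.array([0.9, 1.1, 0.5]), np.array([0.7, 0.5, 1.3])

print("aerosol NO3 fractional bias (%):", round(frac_bias(mod_no3, obs_no3), 1))
print("total nitrate fractional bias (%):",
      round(frac_bias(mod_no3 + mod_hno3, obs_no3 + obs_hno3), 1))
```

If the discrepancy lies mainly in how nitrate is split between the gas and aerosol phases, or in the aerosol nitrate sampling, the total-nitrate bias will be much smaller than the aerosol NO3 bias, which is the behavior reported on this slide.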

  25. CMAQ vs. STN Summary • NO3: large negative bias each month • SO4: negative bias in winter • EC: positive bias in summer • Generally good performance for other species, within performance benchmarks

  26. CMAQ vs. NADP Summary • CMAQ over-predicts wet deposition for SO4, NO3 and NH4 • Generally small positive bias but large error terms • Largest positive bias is in summer (opposite of the other networks)

  27. Annual Average Metrics: CMAQ vs IMPROVE

  28. Spring, Summer, Fall, Winter [seasonal plots]

  29. Annual Average Metrics: CMAQ vs CASTNet

  30. Spring, Summer, Fall, Winter [seasonal plots]

  31. Annual CMAQ vs STN

  32. Spring, Summer, Fall, Winter [seasonal plots]

  33. Annual CMAQ vs NADP

  34. Spring, Summer, Fall, Winter [seasonal plots]

  35. WRAP 2002 CMAQ Pre02c Run: Monthly Analysis
