
U.S. EPA’s Evaluation of Long Range Transport Models


Presentation Transcript


  1. U.S. EPA’s Evaluation of Long Range Transport Models Ralph E. Morris ENVIRON International Corporation A&WMA CPANS Conference University of Calgary April 23-24, 2012

  2. ENVIRON’s work for USEPA LRT Model Evaluation Study • Mesoscale Model Interface (MMIF) tool • Pass WRF/MM5 meteorological model output directly to CALPUFF with minimal modification (CALMET not needed) • Document evaluation of CALPUFF and other LRT dispersion models using tracer field study data • EPA performed all of the model simulations • Perform a Consequence Analysis comparing CALPUFF and CAMx for LRT air quality (AQ) and air quality related value (AQRV) impacts • Evaluate CALPUFF, SCICHEM and CAMx using atmospheric plume chemistry data

  3. MMIF Tool for CALPUFF Met Inputs • CALPUFF needs meteorological variables defined at grid cell centers • WRF uses an Arakawa C grid configuration • U and V winds defined on cell edges • Other met variables defined on cell centers • Winds cannot be passed through to CALPUFF directly • Some level of interpolation is needed • MMIF elected to interpolate winds to cell centers (see the sketch below) [Figure: Arakawa C grid staggering]
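
The face-to-center averaging MMIF performs is simple to sketch. Below is a minimal Python/NumPy illustration, assuming 2-D horizontal wind slices with C-grid staggering; the actual MMIF code, its array ordering and any vertical handling may differ.

```python
import numpy as np

def destagger_c_grid(u_stag, v_stag):
    """Average Arakawa C staggered winds to cell centers.

    On a C grid, U sits on the west/east cell faces (shape ny x nx+1)
    and V on the south/north faces (shape ny+1 x nx), while CALPUFF
    expects winds at the ny x nx cell centers.
    """
    u_center = 0.5 * (u_stag[:, :-1] + u_stag[:, 1:])  # average the two E-W faces
    v_center = 0.5 * (v_stag[:-1, :] + v_stag[1:, :])  # average the two N-S faces
    return u_center, v_center

# Example on a hypothetical 3 x 4 grid of cell centers
u = np.random.rand(3, 5)  # ny x (nx + 1) face values
v = np.random.rand(4, 4)  # (ny + 1) x nx face values
uc, vc = destagger_c_grid(u, v)
print(uc.shape, vc.shape)  # (3, 4) (3, 4)
```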

  4. Inert Tracer Experiment Evaluation • LRT transport and dispersion algorithms evaluated using inert tracer field experiments • Release known amount of inert tracer • Measure at downwind receptor sites • Evaluate LRT models’ ability to predict tracer concentrations • Examples of past tracer experiment evaluation studies • 1986 8-model study • 1990 Rocky Mountain Acid Deposition Model Assessment • 1998 EPA evaluation of CALPUFF using 2 tracer experiments • 1998 IWAQM Phase II recommendations • 1994 European Tracer Experiment (ETEX) • ATMES = real-time model evaluation • ATMES-II = historical model evaluation

  5. USEPA LRT Tracer Test Evaluation • EPA evaluated CALPUFF using five tracer test field study measurements: • 1975 Savannah River Laboratory (SRL75) • 1980 Great Plains (GP80) • 1983 Cross-Appalachia Tracer Experiment (CAPTEX; two releases, CTEX3 and CTEX5) • 1994 European Tracer Experiment (ETEX) • Compare CALPUFF GP80/SRL75 results with the 1998 EPA study • Determine how advances in CALPUFF improve performance • For some experiments compare multiple LRT dispersion models: • CALPUFF, HYSPLIT, FLEXPART, SCIPUFF, CAMx, CALGRID

  6. GP80 and SRL75 Tracer Experiments • GP80: • Receptor arcs at 100 km and 600 km • Evaluate CALPUFF separately on each arc using different configurations • Compare with 1998 study • SRL75: • One 100 km receptor arc • Evaluate CALPUFF using different configurations • Compare with 1998 study

  7. CAPTEX and ETEX • CAPTEX (CTEX3 & CTEX5): • Many CALMET configurations • Evaluate CALMET & MM5 & CALPUFF • Evaluate LRT models using default configuration • ETEX: • Evaluate LRT models using default “out-of-the-box” configuration • Additional CAMx, HYSPLIT and CALPUFF sensitivity tests

  8. Number of LRT Dispersion Model Runs

  9. Cross Appalachian Tracer Experiment (CAPTEX) • 5 tracer releases from Dayton, OH or Sudbury, Canada • CTEX3 – Oct 2, 1983 • CTEX5 – Oct 25-26, 1983 • Meteorological Evaluation • Evaluate MM5 and CALMET • Identify best performing CALMET settings • Used to determine USEPA’s recommended CALMET settings • Six LRT Models Evaluated • CALPUFF & SCIPUFF • HYSPLIT & FLEXPART • CAMx & CALGRID

  10. CTEX3: CALMET Sensitivity Tests Evaluation • 31 CALMET Sensitivity Tests: • MM5 @ 80, 36 & 12 km • How MM5 is used within CALMET • First Guess • Step 1 Winds • None • CALMET @ 18, 12 & 4 km • RMAX1/RMAX2 objective analysis (OA) radii: • A = 500/1000 • B = 100/200 (EPA Recommended) • C = 10/100 • D = NOOBS • 3 MMIF Sensitivity Tests: • MM5 @ 36, 12 & 4 km • CALMETSTAT to evaluate wind speed and direction • Compare against common benchmarks used to interpret MM5/WRF evaluations
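
As a concrete picture of the wind statistics being compared, here is a minimal Python sketch of wind speed/direction bias and gross error, the kind of quantities CALMETSTAT computes; the function and variable names are illustrative (not from the actual tool), and the wrap-around handling is the one detail wind direction genuinely requires.

```python
import numpy as np

def wind_stats(ws_mod, ws_obs, wd_mod, wd_obs):
    """Wind speed (m/s) and direction (deg) bias and gross error.

    Direction differences are wrapped into [-180, 180] deg so that,
    e.g., 350 deg vs 10 deg counts as a 20 deg error, not 340 deg.
    """
    ws_bias = np.mean(ws_mod - ws_obs)
    ws_error = np.mean(np.abs(ws_mod - ws_obs))
    dwd = (wd_mod - wd_obs + 180.0) % 360.0 - 180.0
    wd_bias = np.mean(dwd)
    wd_error = np.mean(np.abs(dwd))
    return ws_bias, ws_error, wd_bias, wd_error

# Hypothetical paired hourly values
ws_m, ws_o = np.array([3.2, 5.1, 4.0]), np.array([2.8, 5.6, 4.4])
wd_m, wd_o = np.array([355.0, 10.0, 180.0]), np.array([5.0, 350.0, 170.0])
print(wind_stats(ws_m, ws_o, wd_m, wd_o))
```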

  11. WS & WD Bias/Error, CALMET w/ 12 km MM5 [Figure: four panels (WS Bias, WD Bias, WS Error, WD Error); EXP4 & EXP6 = 12 & 4 km CALMET; A, B, C, D = RMAX1/RMAX2 of 500/1000, 100/200, 10/100, 0/0]

  12. Conclusions CTEX3 CALMET Model Performance • CTEX3 MM5 and CALMET wind model performance: • The EPA-recommended settings of RMAX1/RMAX2 (100/200, “B” Series) produced the best wind model performance. • Use of 4 km grid resolution in CALMET tended to produce better wind model performance than 12 or 18 km. • Resolutions finer than 4 km were not tested, so no conclusions can be drawn about them. • CALMET wind model performance was better using the 12 and 36 km MM5 data as input than using the 80 km MM4 data. • CTEX5 also found RMAX1/RMAX2 = 100/200 gave the best wind performance • Note: Not an independent evaluation, since WS/WD observations were also used as input to most CALMET tests

  13. CTEX3: CALPUFF/CALMET Sensitivity Tests • CALPUFF evaluation using 25 CALMET sensitivity tests • CALPUFF runs for the CTEX3 tracer release • Evaluate CALPUFF using the 11 ATMES-II statistical performance metrics • Determine which CALMET configuration results in the best CALPUFF performance • CALPUFF evaluation using 3 MMIF met inputs • Evaluate using ATMES-II statistical performance metrics: • Spatial, Temporal & Global • Bias and Error • Scatter • Correlation • Cumulative Distribution
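
For orientation, a minimal Python sketch of several of the ATMES-II style metrics named above (bias, scatter, correlation, cumulative distribution and spatial overlap), assuming paired arrays of modeled (m) and observed (o) tracer concentrations; the exact definitions used in the EPA report should be taken from the report itself.

```python
import numpy as np

def fb(m, o):
    """Fractional bias: 0 is perfect, bounded by [-2, 2]."""
    return (m.mean() - o.mean()) / (0.5 * (m.mean() + o.mean()))

def nmse(m, o):
    """Normalized mean square error: 0 is perfect."""
    return np.mean((m - o) ** 2) / (m.mean() * o.mean())

def pcc(m, o):
    """Pearson correlation coefficient."""
    return np.corrcoef(m, o)[0, 1]

def ks_param(m, o, nbins=100):
    """Kolmogorov-Smirnov parameter (%): maximum separation of the
    modeled and observed cumulative concentration distributions."""
    edges = np.linspace(min(m.min(), o.min()), max(m.max(), o.max()), nbins)
    cdf_m = np.searchsorted(np.sort(m), edges) / m.size
    cdf_o = np.searchsorted(np.sort(o), edges) / o.size
    return 100.0 * np.max(np.abs(cdf_m - cdf_o))

def fms(m, o, thresh):
    """Figure of merit in space (%): overlap of the receptor sets where
    model and observations both exceed a concentration threshold."""
    a, b = m > thresh, o > thresh
    return 100.0 * np.sum(a & b) / np.sum(a | b)
```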

  14. Two Composite Statistics for Ranking Models • RANK combines statistics for correlation (R2), bias (FB), spatial (FMS) and cumulative distribution (KS), with a perfect model giving a score of 4.0 • The RANK statistic needs to be revised; we propose replacing FB with NMSE • AVERAGE averages each of the N models’ rankings, from 1 to N, across the 11 ATMES-II statistics, with the best model being the one with the lowest score (closest to 1.0) • 1.0 = model is the highest ranked model across all 11 ATMES-II statistical metrics
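
A hedged sketch of both composites follows, assuming the commonly used RANK formulation in which each of the four terms contributes at most 1.0; verify the exact formula against the EPA report before relying on it.

```python
import numpy as np

def rank_score(r2, fb, fms, ks):
    """Composite RANK: a perfect model scores 4.0.
    r2 = squared correlation, fb = fractional bias (range [-2, 2]),
    fms = figure of merit in space (%), ks = Kolmogorov-Smirnov (%)."""
    return r2 + (1.0 - abs(fb) / 2.0) + fms / 100.0 + (1.0 - ks / 100.0)

def average_rank(scores, lower_is_better):
    """Composite AVERAGE: scores is (n_models, n_metrics); each metric
    column is ranked 1 (best) to N (worst) and the ranks are averaged,
    so 1.0 means a model ranked first on every metric."""
    n_models, n_metrics = scores.shape
    ranks = np.empty((n_models, n_metrics))
    for j in range(n_metrics):
        order = np.argsort(scores[:, j] if lower_is_better[j] else -scores[:, j])
        ranks[order, j] = np.arange(1, n_models + 1)
    return ranks.mean(axis=1)

# Hypothetical: 3 models x 2 metrics (first lower-is-better, second higher-is-better)
s = np.array([[0.2, 90.0], [0.5, 80.0], [0.1, 60.0]])
print(average_rank(s, lower_is_better=[True, False]))  # [1.5, 2.5, 2.0]
```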

  15. CALPUFF MPE using 12 km MM5 [Figure: CALPUFF model performance evaluation; EXP4 & EXP6 = 12 & 4 km CALMET; A, B, C, D = RMAX1/RMAX2 of 500/1000, 100/200, 10/100, 0/0]

  16. CTEX3 CALPUFF Evaluation Conclusions • CALPUFF with MMIF was the best performing configuration for CTEX3 (worst for CTEX5). • CALPUFF/CALMET with RMAX1/RMAX2 = 100/200 was the worst performing RMAX1/RMAX2 configuration • In contrast, it was the best performing CALMET configuration for WS/WD • CALPUFF/CALMET with higher MM5 resolution (36 and 12 km) performs better than using 80 km MM4 data. • Generally, CALPUFF/CALMET using 4 km resolution performs better

  17. CTEX3 Six LRT Model Evaluation • Use common 36 km MM5 meteorology in all models • Run 6 LRT models out-of-the-box using default configuration • Evaluate using ATMES-II statistical metrics • 6 LRT dispersion models evaluated: • Two Lagrangian Puff Models • CALPUFF (w/ MMIF, best performing for CTEX3) and SCIPUFF • Two Lagrangian Particle Models • FLEXPART and HYSPLIT • Two Eulerian Grid Models • CAMx and CALGRID

  18. CTEX3 Tracer Test LRT Model Evaluation (best performing model has lowest value)

  19. CTEX3 Tracer Test LRT Model Evaluation (best performing model has highest value)

  20. Conclusions of EPA LRT Evaluation (1 of 3) • GP80 Tracer Field Experiment • Using different valid CALMET configurations, the maximum CALPUFF concentrations vary by a factor of 3 • Need standardized model configuration for regulatory modeling • Because MMIF has fewer options, it produces less variation • The CALPUFF “SLUG” near-field option was needed to reproduce the “good” model performance on the 600 km arc from the 1998 EPA study • Raises questions about the validity of the 1998 EPA study, since SLUG is typically not used for LRT dispersion modeling • SRL75 Tracer Field Experiment • The fitted Gaussian plume evaluation approach can be flawed (see the sketch below) • The a priori assumption that the observed plume has a Gaussian distribution may not be valid at LRT distances
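
To make the fitted-plume concern concrete, here is a small Python sketch using synthetic (hypothetical) arc data: a single Gaussian is fit to a crosswind tracer distribution that has a secondary lobe, and the fitted parameters stop being a faithful summary of the observed plume.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, cmax, y0, sigma):
    """Single Gaussian crosswind concentration profile."""
    return cmax * np.exp(-0.5 * ((y - y0) / sigma) ** 2)

# Hypothetical crosswind distances (km along the receptor arc) and
# tracer concentrations with a secondary lobe: exactly the kind of
# non-Gaussian structure that can appear at LRT distances.
y_km = np.linspace(-50.0, 50.0, 21)
c_obs = gaussian(y_km, 10.0, -5.0, 12.0) + gaussian(y_km, 4.0, 25.0, 6.0)

(cmax, y0, sigma), _ = curve_fit(gaussian, y_km, c_obs, p0=[c_obs.max(), 0.0, 10.0])
print(f"fitted peak {cmax:.1f} vs observed max {c_obs.max():.1f}")
print(f"fitted centerline {y0:.1f} km, fitted sigma-y {sigma:.1f} km")
```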

  21. Conclusions of EPA LRT Evaluation (2 of 3) • CAPTEX Tracer Field Experiment • RMAX1/RMAX2 = 100/200 (the EPA recommendation) produces the best CALMET WS/WD performance but the worst CALPUFF tracer performance • EPA needs help from the modeling community to identify regulatory CALMET settings • CALPUFF/MMIF performs better than CALPUFF/CALMET for CTEX3 but worse for CTEX5 • CTEX3: CAMx/SCIPUFF perform best, followed by CALPUFF/FLEXPART, with HYSPLIT/CALGRID worst • CTEX5: CAMx/HYSPLIT perform best, followed by SCIPUFF/FLEXPART, with CALPUFF/CALGRID worst

  22. Conclusions of EPA LRT Evaluation (3 of 3) • ETEX Tracer Field Experiment • Objective was to evaluate models run with default configurations using common 36 km MM5 inputs • Exception was to allow CALPUFF puff splitting during all hours rather than the default of once per day (see the sketch below) • CAMx, HYSPLIT & SCIPUFF perform best • FLEXPART & CALPUFF perform worst • Sensitivity tests for CALPUFF, CAMx and HYSPLIT • For CALPUFF, more aggressive puff splitting did not help • For CAMx, the default configuration was best performing • For HYSPLIT, some hybrid particle/puff configurations perform better than the particle-only default
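
For reference, a hedged sketch of the once-per-day vs all-hours puff-splitting choice, expressed as the 24 hourly on/off flags CALPUFF reads from its control file; the IRESPLIT name and the default split hour are recalled from the CALPUFF user's guide and should be verified against your model version.

```python
def iresplit_flags(all_hours=False, split_hour=17):
    """Return 24 on/off flags controlling the hours in which puffs
    may split (assumed to correspond to CALPUFF's IRESPLIT array).
    The default behavior splits once per day, in a single afternoon
    hour; the ETEX runs enabled splitting in every hour."""
    if all_hours:
        return [1] * 24                # all-hours splitting, as used for ETEX
    flags = [0] * 24
    flags[split_hour - 1] = 1          # once per day (hour 17 assumed default)
    return flags

print("IRESPLIT =", ", ".join(str(f) for f in iresplit_flags(all_hours=True)))
```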

  23. Questions? • Full draft LRT tracer report available on EPA SCRAM website (consequence analysis & plume evaluation to come)
