
2002 MM5 Model Evaluation 12 vs. 36 km Results

Chris Emery, Yiqin Jia, Sue Kemball-Cook, and Ralph Morris (ENVIRON International Corporation); Zion Wang (UCR CE-CERT). Western Regional Air Partnership (WRAP) Regional Modeling Center (RMC). National RPO Meeting, May 25, 2004.


Presentation Transcript


  1. 2002 MM5 Model Evaluation: 12 vs. 36 km Results Chris Emery, Yiqin Jia, Sue Kemball-Cook, and Ralph Morris ENVIRON International Corporation Zion Wang UCR CE-CERT Western Regional Air Partnership (WRAP) Regional Modeling Center (RMC) National RPO Meeting May 25, 2004

  2. 2002 MM5 Evaluation Review • IA/WI 2002 MM5 Configuration on National RPO 36 km Grid, except: • Used MM5 v3.6.2 • Invoked Reisner II, disregarded INTERPX • Evaluation Methodology • Synoptic Evaluation • Statistical Evaluation using METSTAT and surface data • WS, WD, T, RH • Evaluation against upper-air obs • Compared statistical performance against EDAS, VISTAS

  3. METSTAT Evaluation Package • Statistics: • Absolute Bias and Error, RMSE, IOA • Daily and, where appropriate, hourly evaluation • Statistical Performance Benchmarks • Based on an analysis of > 30 MM5 and RAMS runs • Not meant as a pass/fail test, but to put modeling results into perspective
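
The statistics METSTAT reports follow the standard definitions: mean bias, mean absolute (gross) error, root mean square error, and the index of agreement (IOA). Below is a minimal Python sketch of those four metrics, assuming paired model/observation arrays; the actual METSTAT package additionally handles wind-direction wraparound, QA screening, and the daily/hourly aggregation noted above.

```python
import numpy as np

def metstat_metrics(model, obs):
    """Core METSTAT-style statistics for paired model/obs values.

    Illustrative sketch only; assumes simple scalar variables
    (wind direction needs +/-180 deg wraparound, not shown here).
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    diff = model - obs
    bias = diff.mean()                    # mean bias (signed)
    gross_error = np.abs(diff).mean()     # mean absolute (gross) error
    rmse = np.sqrt((diff ** 2).mean())    # root mean square error
    obar = obs.mean()
    # Index of agreement: 1 = perfect agreement, values near 0 = no skill
    ioa = 1.0 - (diff ** 2).sum() / (
        (np.abs(model - obar) + np.abs(obs - obar)) ** 2).sum()
    return {"bias": bias, "gross_error": gross_error,
            "rmse": rmse, "ioa": ioa}
```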

  4. Datasets for Met Evaluation • NCAR dataset ds472 airport surface met observations • Twice-Daily Upper-Air Profile Obs (~120 in US) • Temperature • Moisture • Scatter plots of performance metrics • Include box for benchmark • Include historical MM5/RAMS simulation results • WS RMSE vs. WD Gross Error • Temperature Bias vs. Temperature Error • Humidity Bias vs. Humidity Error
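
To illustrate how these benchmark scatter plots are constructed, here is a matplotlib sketch for the WS RMSE vs. WD gross error panel. The plotted per-subdomain values are invented placeholders, and the benchmark box limits (WS RMSE of 2 m/s, WD gross error of 30 degrees) are commonly cited values, stated here as assumptions:

```python
import matplotlib.pyplot as plt

# Placeholder per-subdomain statistics (hypothetical values, one point
# per subdomain); real points come from METSTAT output for each run.
ws_rmse = [1.6, 2.3, 2.8, 1.9]             # wind speed RMSE (m/s)
wd_gross_error = [24.0, 35.0, 44.0, 28.0]  # wind direction gross error (deg)

fig, ax = plt.subplots()
ax.scatter(ws_rmse, wd_gross_error, label="subdomains")
# Benchmark "box": runs plotting inside the rectangle meet both benchmarks.
# Limits assumed here: WS RMSE <= 2 m/s, WD gross error <= 30 deg.
ax.add_patch(plt.Rectangle((0.0, 0.0), 2.0, 30.0, fill=False,
                           edgecolor="gray", linestyle="--"))
ax.set_xlim(0, 4)
ax.set_ylim(0, 60)
ax.set_xlabel("Wind Speed RMSE (m/s)")
ax.set_ylabel("Wind Direction Gross Error (deg)")
ax.set_title("WS RMSE vs. WD Gross Error (illustrative)")
plt.show()
```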

  5. Subdomains for Model Evaluation: 1 = Pacific NW; 2 = SW; 3 = North; 4 = Desert SW; 5 = CenrapN; 6 = CenrapS; 7 = Great Lakes; 8 = Ohio Valley; 9 = SE; 10 = NE; 11 = MidAtlantic

  6. Evaluation of 36-km WRAP MM5 Results • Model performed reasonably well for the eastern subdomains, but not for the west (WRAP region) • General cool, moist bias in the Western US • Difficulty resolving Western US orography? • May get better performance with higher resolution • Pleim-Xiu scheme optimized more for the eastern US? • More optimization needed for desert and rocky ground? • MM5 performs better in winter than in summer • Weaker forcing in summer • July 2002 Desert SW subdomain exhibits a low temperature bias and a high humidity bias

  7. Comparison: EDAS vs. WRAP MM5 • Is it possible that the 36-km MM5 biases are caused by the analyses used to nudge (FDDA) the model? • We evaluated the EDAS analysis fields to see whether the same biases exist • Used METSTAT to examine the EDAS surface fields • The input EDAS fields do not show the cold, moist bias seen in the 36-km MM5 simulation, but the wind speed underestimation bias is present • Performance issues are therefore not due to the EDAS analysis fields; they must be internally generated by MM5

  8. Comparison: VISTAS vs. WRAP MM5 • Evaluated the VISTAS 2002 MM5 simulation to see whether similar biases exist • Different configuration: KF II, Reisner I • Both MM5 simulations had trouble in the western U.S. – the same subdomains lie outside the statistical benchmarks • Both MM5 simulations performed better in winter than in summer

  9. Comparison: VISTAS vs. WRAP MM5 • VISTAS: • Better simulation of PBL temperature and humidity profiles • Less surface humidity bias in the western U.S. • Markedly better summer precipitation field • WRAP: • Less surface temperature bias than VISTAS during winter • Overall, VISTAS did better in the west • Further tests indicate that the use of KF II has a larger effect on performance than Reisner I

  10. Addition of 12-km WRAP Grid • ICs/BCs extracted from the 36-km MM5 fields • 3-D FDDA fields extracted from the 36-km MM5 fields • Preliminary 5-day run starting 12Z July 1

  11. Comparison: 12 vs. 36-km WRAP MM5 • Performance scatter plots prepared • Directly compare 36-km statistics with 12-km statistics for each western sub-region • Provides mean stats over July 1-6 preliminary test period
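
A sketch of that direct 12 vs. 36 km comparison, assuming the daily METSTAT results for each run have been collected into CSV files with date, subdomain, and metric columns (the file names and column names are hypothetical):

```python
import pandas as pd

# Assumed layout: one row per day per subdomain, with metric columns
# such as t_bias, t_error, ws_rmse (file names are hypothetical).
stats_36 = pd.read_csv("metstat_36km_daily.csv", parse_dates=["date"])
stats_12 = pd.read_csv("metstat_12km_daily.csv", parse_dates=["date"])

def window_mean(df, start="2002-07-01", end="2002-07-06"):
    """Mean statistics per subdomain over the preliminary test window."""
    m = df[(df["date"] >= start) & (df["date"] <= end)]
    return m.groupby("subdomain")[["t_bias", "t_error", "ws_rmse"]].mean()

# One row per western sub-region, pairing 36-km and 12-km mean statistics
paired = window_mean(stats_36).join(window_mean(stats_12),
                                    lsuffix="_36km", rsuffix="_12km")
print(paired)
```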

  12. Comparison: 12 vs. 36-km WRAP MM5 • Results: • No significant or consistent impact on wind speed/direction performance • Temperature bias dramatically improved in all areas, but gross error worsens • Impacts on humidity performance are minor, and worse in the Desert SW • There appear to be larger issues that 12-km grid resolution alone does not resolve • Remember that all IC/BC and 3-D FDDA inputs are derived from the 36-km results • This issue is addressed in the 12-km sensitivity tests
