
Skill of the National Air Quality Forecast System on the Metropolitan Scale: Philadelphia, Summer 2007


Presentation Transcript


  1. Skill of the National Air Quality Forecast System on the Metropolitan Scale: Philadelphia, Summer 2007. William F. Ryan and Chad Bahrmann (The Pennsylvania State University); Amy Huff (Battelle Memorial Institute). Contact: Bill Ryan, wfr1@psu.edu

  2. Philadelphia Metropolitan Area The Philadelphia forecast area contains 19 O3 monitors and includes portions of four states (PA, NJ, DE, MD). For this analysis we treat the maximum 8-hour average O3 concentration at any monitor as the “domain maximum”. Forecasts cover June 2 – September 1, 2007 (92 cases).

  3. Proxy Monitors Several counties are under-served with respect to O3 monitors (dark blue X’s on the map). Our hypothesis was that the forecast model would perform better if proxy monitors were “added” to the analysis, removing problems of plume placement, i.e., the right [O3] but in the wrong place. In practice, there was no significant difference in domain maximum forecast performance when proxy sites were included.

  4. Getting and Processing NAQFS Data • NAQFS GRIB2 files were obtained from NCDC (many, many thanks to Alan Hall @ NCDC). • A PERL script was used to process the GRIB2 files: a MySQL database contained metadata for specific monitor locations, and the point coordinates were passed to the freely available NWS NDFD GRIB2 decoder (degrib) to extract values.
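
The point-extraction step can be illustrated with a short script. The study’s own workflow used a PERL script, a MySQL metadata table, and degrib; the sketch below substitutes Python with the pygrib library, and the file name, variable name, and monitor coordinates are placeholders rather than values from the study.

```python
# Hypothetical sketch of extracting a forecast value at a monitor location
# from a NAQFS GRIB2 file. The original workflow used PERL + MySQL + degrib;
# pygrib is used here as a stand-in. File name, variable name, and the
# Philadelphia coordinates are illustrative only.
import numpy as np
import pygrib

def point_value(grib_path, var_name, lat, lon):
    """Return var_name at the grid point nearest (lat, lon)."""
    grbs = pygrib.open(grib_path)
    try:
        grb = grbs.select(name=var_name)[0]   # first matching GRIB message
        vals = grb.values                     # 2-D forecast field
        lats, lons = grb.latlons()            # grid-point coordinates
    finally:
        grbs.close()
    # nearest-neighbor lookup on the model grid
    i, j = np.unravel_index(np.argmin((lats - lat) ** 2 + (lons - lon) ** 2),
                            lats.shape)
    return float(vals[i, j])

# o3 = point_value("naqfs_20070710.grib2", "Ozone concentration", 39.95, -75.16)
```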

  5. Processing NAM Data • A more complex process was used for met data from the NAM model due to the number of variables and vertical levels. • Data were obtained from NOMADS, the NOAA Operational Model Archive and Distribution System (http://nomads.ncdc.noaa.gov/). • Variables were then extracted using PERL and NCL (NCAR Command Language).
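
For the multi-level NAM fields, the same idea extends to pulling one variable on several pressure levels. The study used PERL and NCL; the sketch below again assumes pygrib, and the file, variable, and level names are illustrative.

```python
# Hypothetical sketch of pulling a NAM met variable on several isobaric levels
# from a GRIB2 file retrieved from NOMADS (the study itself used PERL and NCL).
import pygrib

def isobaric_profile(grib_path, var_name, levels_hpa):
    """Return {level_hPa: 2-D field} for var_name on the requested levels."""
    grbs = pygrib.open(grib_path)
    fields = {}
    try:
        for lev in levels_hpa:
            msg = grbs.select(name=var_name,
                              typeOfLevel="isobaricInhPa", level=lev)[0]
            fields[lev] = msg.values
    finally:
        grbs.close()
    return fields

# rh = isobaric_profile("nam_2007071012_f24.grib2", "Relative humidity", [925, 850, 700])
```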

  6. Basic Forecast Accuracy Measures • For forecast domain maximum 8-hour O3: • Bias: 4.8 ppbv • Median Absolute Error: 8.2 ppbv • Mean Absolute Error: 10.5 ppbv • Root Mean Square Error: 13.2 ppbv • Roughly consistent with Summer 2005 results (Huff, 2007).
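
For reference, a minimal sketch of how these verification statistics can be computed from paired forecast and observed domain-maximum values (the input arrays are placeholders):

```python
# Verification statistics for paired forecast/observed domain-maximum 8-hour O3.
import numpy as np

def verification_stats(fcst, obs):
    fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
    err = fcst - obs                                  # forecast minus observed
    return {
        "bias": err.mean(),
        "median_abs_error": np.median(np.abs(err)),
        "mean_abs_error": np.abs(err).mean(),
        "rmse": np.sqrt((err ** 2).mean()),
    }
```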

  7. Forecast and Observed Domain Maximum • Correlation Coefficient (r): 0.78 • Best-Fit Line: [O3]obs = 0.935 × [O3]fc – 0.16 • Explained Variance (r²): 0.62
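
The correlation and best-fit line above come from an ordinary least-squares fit of observed on forecast domain maxima; a minimal sketch using scipy.stats.linregress:

```python
# Least-squares fit of observed on forecast domain maxima: obs = slope*fcst + intercept.
import numpy as np
from scipy.stats import linregress

def fit_obs_on_fcst(fcst, obs):
    fit = linregress(np.asarray(fcst, float), np.asarray(obs, float))
    return {"slope": fit.slope, "intercept": fit.intercept,
            "r": fit.rvalue, "r2": fit.rvalue ** 2}
```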

  8. Comparison to Other Forecast Methods

  9. Forecast Error and Observed O3

  10. Forecast and Observed Time Series

  11. Mean Bias Corrections • f* = f – C, where f* is the “corrected” forecast value, f is the value forecast by the NAQFS, and C is the mean bias correction. • Hypothesis: NAQFS forecast error may be serially correlated, perhaps in response to persistent weather patterns, so forecast error may be reduced by correcting for the “recent” error pattern. But how “recent”? • For a running mean bias correction, C = (1/p) Σi=1..p [f(t–i) – o(t–i)], where p is the number of days before today in the running mean, f is the ozone value forecast by the NAQFS, and o is the observed ozone value.
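
A minimal sketch of the running mean bias correction as defined above: the correction C for day t is the mean of (forecast – observed) over the previous p days, and the corrected forecast is f* = f – C.

```python
# Running mean bias correction: f*(t) = f(t) - mean(f - o) over the prior p days.
import numpy as np

def running_bias_corrected(fcst, obs, p):
    fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
    corrected = fcst.copy()
    for t in range(1, len(fcst)):
        lo = max(0, t - p)
        c = np.mean(fcst[lo:t] - obs[lo:t])   # mean bias over the prior window
        corrected[t] = fcst[t] - c
    return corrected

# Example: the ~2-day "synoptic" window discussed later in the presentation
# f_star = running_bias_corrected(naqfs_fcst, observed, p=2)
```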

  12. Running Mean Bias Correction: comparison of the NAQFS forecast with 1-day, 2-day, 3-day, 5-day, 7-day, 14-day, and all-days running bias corrections.

  13. Hit Rate: O3 ≥ 85 ppbv Observed and Forecast. “Close” means a forecast > 80 ppbv with observed ≥ 85 ppbv. (Chart compares NAQFS, FCST, STAT, 1-day RB, and 2-day RB.)

  14. False Alarm Rate: ≥ 85 ppbv Forecast, Not Observed. “Close” means observed > 80 ppbv. (Chart compares NAQFS, FCST, STAT, 1-day RB, and 2-day RB.)
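
A sketch of the categorical scores in slides 13–14 for an 85 ppbv exceedance threshold; the “close” credit described on the slides is not applied here, and the input arrays are placeholders.

```python
# Hit rate and false alarm rate for 8-hour O3 exceedances of a threshold.
import numpy as np

def categorical_scores(fcst, obs, thresh=85.0):
    fcst_event = np.asarray(fcst, float) >= thresh
    obs_event = np.asarray(obs, float) >= thresh
    hits = np.sum(fcst_event & obs_event)           # forecast and observed
    misses = np.sum(~fcst_event & obs_event)        # observed, not forecast
    false_alarms = np.sum(fcst_event & ~obs_event)  # forecast, not observed
    return {
        "hit_rate": hits / max(hits + misses, 1),
        "false_alarm_rate": false_alarms / max(hits + false_alarms, 1),
    }
```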

  15. What Do False Alarms Look Like? • Thunderstorms in the afternoon. • Cloud cover from the previous day’s or overnight convection. • In three of five cases, the convection was well forecast by the forecaster and the statistical model adjusted well. • A few examples follow:

  16. False Alarm: July 10, 2007 Convection forms along trough in weakly forced synoptic pattern.

  17. False Alarm: July 29, 2007 Stationary front is focus for strong convection. Expert and statistical forecasts discounted NAQFS forecast.

  18. False Alarm: July 30, 2007 Scattered morning convection and cloud remnants throughout day. Expert and statistical forecasts were able to resolve this pattern.

  19. False Alarm: August 31, 2007 Again, stationary front with significant cloud cover.

  20. False Alarm: August 25, 2007 • Excessive Heat Warning on this day. High temperatures drove high NAQFS O3. All methods missed this one. • Cloud remnants lingered through mid-day. Locations south of BWI did see Code Orange.

  21. NAQFS: Strong Temperature Dependence

  22. Forecast Model Post-Processing • All forecast models contain errors or simply cannot resolve certain phenomena. • For example, met model output consists of pressure, moisture, wind, and temperature averaged over a grid-cell volume, but what if we want to know whether it will be sunny at Point A? • Model Output Statistics (MOS) create a statistical relationship between model output variables and sensible weather.

  23. Application to Air Quality Models? • What is required for MOS? • A long time series of data. • A model unchanged through the period (“frozen”). • Neither requirement can be met by current AQ models: both met and chemistry models (not to mention emissions) are constantly being developed and upgraded. • What to do instead?

  24. Model Output Calibration (MOC) • Determine the error (here, in 8-hour peak O3): [O3]fcst – [O3]obs = [O3]err • Find predictors that can “explain” as much of the variation in the error as possible: [O3]err ~ a + b1x1 + b2x2 + … + bnxn, i.e., fit a linear regression model to the error using a variety of predictors. • MOC corrected forecast: [O3]MOC = [O3]fcst – [O3]err, where [O3]err is the error predicted by the regression.
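
A minimal sketch of the MOC step, assuming an ordinary least-squares fit: regress the NAQFS error on the met predictors, then subtract the predicted error from the forecast. In practice the regression would be fit on a training sample and applied to independent days.

```python
# MOC sketch: fit [O3]err = a + b1*x1 + ... + bn*xn, then subtract the predicted error.
import numpy as np

def moc_correct(fcst, obs, predictors):
    """predictors: (n_days, n_vars) array of met predictors for each forecast day."""
    fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
    X = np.column_stack([np.ones(len(fcst)), np.asarray(predictors, float)])
    err = fcst - obs                                 # [O3]err
    coefs, *_ = np.linalg.lstsq(X, err, rcond=None)  # a, b1..bn
    return fcst - X @ coefs                          # MOC-corrected forecast
```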

  25. MOC Simple Test • Running bias results showed that a “synoptic” time scale (~2 days) gave the best results, so can weather variables help to correct? • Use NAM (WRF) 24- and 30-hour forecast met variables for a location northeast of PHL. • Variables with correlation > 0.3 were tested using forward stepwise regression. • A set of 5 predictors was selected: boundary-layer and aloft RH, boundary-layer and aloft winds, and mid-level temperature.
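
The screening and forward stepwise selection can be sketched as below: keep candidates whose correlation with the forecast error exceeds 0.3, then add predictors one at a time as long as the explained variance of the error keeps improving. The stopping tolerance is an assumption, not a value from the study.

```python
# Forward stepwise selection of error predictors (correlation screen at |r| > 0.3).
import numpy as np

def forward_stepwise(err, candidates, min_corr=0.3, min_gain=0.01):
    """candidates: dict of name -> 1-D predictor array. Returns selected names."""
    err = np.asarray(err, float)
    pool = [k for k, v in candidates.items()
            if abs(np.corrcoef(err, np.asarray(v, float))[0, 1]) > min_corr]
    selected, best_r2 = [], 0.0
    while pool:
        r2 = {}
        for k in pool:
            cols = [np.asarray(candidates[c], float) for c in selected + [k]]
            X = np.column_stack([np.ones(len(err))] + cols)
            coefs, *_ = np.linalg.lstsq(X, err, rcond=None)
            r2[k] = 1.0 - (err - X @ coefs).var() / err.var()
        best = max(r2, key=r2.get)
        if r2[best] - best_r2 < min_gain:   # stop when improvement is marginal
            break
        selected.append(best)
        best_r2 = r2[best]
        pool.remove(best)
    return selected
```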

  26. Can Linear Regression Fit the Error? NAQFS error is approximately normally distributed, which gives a linear regression model a reasonable chance of explaining the variance in the error. For the simple test, 40% of the variance was explained.

  27. Skill of Test MOC Forecasts • Median AE: 5.6 ppbv, an improvement of 32% over the operational NAQFS. • Skill in categorical forecasts (hits and false alarms of Code Orange) was not much different from the operational NAQFS. • But in false alarm cases, the MOC reduced error by an average of 10 ppbv.

  28. Further Research (Is Always Necessary) • Additional met variables: only 24- and 30-hour WRF forecasts were used here. • Shorter calibration periods: all days in the summer were used here; what about a 15–45 day calibration period? • Other variables: only met predictors were used here; what if O3 were used as a predictor? • Combine with the running bias correction?

  29. Conclusions • The NAQFS, on the metropolitan scale in PHL, is as good as, or better than, other standard forecast techniques. • The major shortcoming of the NAQFS is false alarms in convectively active cases. • NAQFS results, especially in high O3 cases, can be improved by simple running bias correction techniques. • MOC techniques show promise: the simplest test gave a 32% improvement in median absolute error.
