29 April 2005

Ensemble-Based Data Assimilation, Model Post-Processing, and Regional NWP: Initial Results of the University of Washington CSTAR Project
Greg Hakim & Cliff Mass, University of Washington


Presentation Transcript


  1. Ensemble-Based Data Assimilation, Model Post-Processing, and Regional NWP: Initial Results of the University of Washington CSTAR Project 29 April 2005 Greg Hakim & Cliff Mass, University of Washington

  2. UW CSTAR • Leverages large regional prediction infrastructure. • Built on long-term cooperation between UW and Northwest NWS offices and Western Region. • Broad goal: develop and test new prediction approaches applicable not only to the western U.S. but the entire country.

  3. UW CSTAR Specific Goals • Develop an EnKF data assimilation system. • Evaluate high-resolution ensemble forecasts. • Develop post-processing tools for grid-based model bias removal for use in IFPS. • Evaluate high-resolution prediction over the NW, including testing of WRF.

  4. UW CSTAR Specific Goals • Maintain & expand the Northwest regional data collection system. • new QC and mesoscale verification tools. • Real-time coupled atmos-hydro forecasting • including forcing by atmospheric ensembles. • Experimental EnKF-based Analysis of Record (AOR).

  5. UW Data Assimilation Research • Ensemble Kalman Filter (EnKF) • Ryan Torn thesis research. • Sebastien Dirren.

  6. EnKF Basic Idea: a cycle of analyses → ensemble forecasts (model runs) → observation assimilation → new analyses.

  7. Gaussian Update: analysis = background + weighted observations, x_a = x_b + K (y − H x_b), where the innovation y − H x_b is the new observation information and K is the Kalman gain matrix.
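The Gaussian update on slide 7 can be sketched in a few lines of NumPy. This is an illustrative toy, not the UW system: the 3-variable state, direct observation of the first component, and the values of H, R, and y are all invented for the example; the symbols are the standard Kalman-filter ones.

```python
import numpy as np

# Toy Gaussian (Kalman) update: analysis = background + gain * innovation.

rng = np.random.default_rng(0)

n, n_ens = 3, 90                       # state size, ensemble size
Xb = rng.normal(0.0, 1.0, (n, n_ens))  # background ensemble (columns = members)
H = np.array([[1.0, 0.0, 0.0]])        # observation operator: observe variable 0
R = np.array([[0.5]])                  # observation-error variance
y = np.array([1.2])                    # the observation

# Background statistics: B estimated from the ensemble sample covariance.
xb_mean = Xb.mean(axis=1, keepdims=True)
Xp = Xb - xb_mean                      # ensemble perturbations
B = Xp @ Xp.T / (n_ens - 1)

# Kalman gain matrix: K = B H^T (H B H^T + R)^-1.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Innovation (new observation information) updates the background mean.
innovation = y.reshape(-1, 1) - H @ xb_mean
xa_mean = xb_mean + K @ innovation
```

Because R > 0, the gain on the observed variable lies between 0 and 1, so the analysis of that variable falls between the background mean and the observation.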

  8. Summary of Ensemble Kalman Filter (EnKF) Algorithm • Ensemble forecast provides background estimate & statistics (B) for new analyses. • Ensemble analysis with new observations. • Ensemble forecast to arbitrary future time.
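The three-step cycle summarized on slide 8 can be sketched with a trivial scalar system. Everything here is an illustrative assumption: the damped-persistence "model", the noise sizes, and the perturbed-observation form of the analysis are stand-ins, not the operational configuration.

```python
import numpy as np

# Toy EnKF cycle: (1) ensemble forecast -> (2) ensemble analysis -> (3) forecast.

rng = np.random.default_rng(1)

def forecast(x):
    """Toy model step: damped persistence plus model noise."""
    return 0.9 * x + rng.normal(0.0, 0.1, size=x.shape)

def enkf_analysis(ens, y, r):
    """Scalar EnKF update (perturbed-observation form)."""
    var_b = ens.var(ddof=1)            # background statistics (B) from the ensemble
    k = var_b / (var_b + r)            # Kalman gain for a directly observed scalar
    # each member assimilates an independently jittered copy of the observation
    y_pert = y + rng.normal(0.0, np.sqrt(r), size=ens.shape)
    return ens + k * (y_pert - ens)

ens = rng.normal(0.0, 1.0, 20)         # (1) ensemble forecast = background
for y_obs in [0.8, 0.7, 0.9]:          # cycle over observation times
    ens = enkf_analysis(ens, y_obs, r=0.25)   # (2) analysis with new observations
    ens = forecast(ens)                # (3) forecast to the next analysis time
```

Each analysis step contracts the ensemble spread toward the observation; each forecast step re-grows it, which is what supplies flow-dependent statistics for the next cycle.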

  9. EnKF Pros & Cons • Pros: • Flow-dependent background-error covariance. • Scale-independent approach: mesoscale, complex topography, cloud fields. • Cons: • Rank-deficient ensembles. • Covariance “inflation” & “localization” often used. • No help for near-field cross-variable covariance. • Calculation scales with number of observations.
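The two remedies for rank-deficient ensembles named on slide 9 can be sketched as follows. The inflation factor, the Gaussian taper (a simple stand-in for the Gaspari-Cohn function), the localization length, and the grid are illustrative choices, not the values used in the UW system.

```python
import numpy as np

# Sketch of multiplicative covariance inflation and covariance localization.

rng = np.random.default_rng(2)

n, n_ens = 40, 10                        # far fewer members than state variables
Xb = rng.normal(0.0, 1.0, (n, n_ens))
xb_mean = Xb.mean(axis=1, keepdims=True)

# Multiplicative inflation: widen perturbations about the mean.
inflation = 1.05
Xb_infl = xb_mean + inflation * (Xb - xb_mean)

# Localization: taper sample covariances with distance so spurious long-range
# correlations (an artifact of the small ensemble) are damped toward zero.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # grid-point distance
L = 10.0                                 # localization length scale (grid points)
taper = np.exp(-0.5 * (dist / L) ** 2)   # Gaussian stand-in for Gaspari-Cohn

Xp = Xb_infl - Xb_infl.mean(axis=1, keepdims=True)
B_raw = Xp @ Xp.T / (n_ens - 1)
B_loc = taper * B_raw                    # Schur (element-wise) product
```

Inflation leaves the ensemble mean unchanged while increasing spread; the Schur product leaves variances (the diagonal) unchanged while shrinking distant covariances.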

  10. When is the EnKF Most Useful? • Sparse observations. • Mesoscale circulations. • Complex topography. • Little advantage over 3DVAR for: • dense observations. • synoptic/planetary-scale analyses. • flows where conventional balances apply.

  11. Ensemble Covariance Examples

  12. 3D-VAR covariance ensemble covariance Temperature-Temperature Covariance

  13. Temperature-Wind Covariance 3D-VAR covariance ensemble covariance

  14. Mesoscale Example: cov(|V|, qrain)

  15. Limited-area EnKFs • Most EnKF development has been for global models. • Limited-area EnKFs require ensemble BCs. • Several options available (Torn et al. 2005). • Here we use random draws from N(0, B). • B = WRF 3DVAR covariance model. • Red-noise process in time. • Perturbations centered on the GFS 12-hour forecast. • Affects an ~100 km wide region near the lateral boundaries.
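The boundary-perturbation recipe on slide 15 (draws from N(0, B), evolved as red noise in time, centered on a control forecast) can be sketched like this. The Gaussian correlation matrix is a simple stand-in for the WRF 3DVAR B, and alpha, the sizes, and the zero control forecast are illustrative assumptions.

```python
import numpy as np

# Sketch: red-noise (AR(1)) boundary-condition perturbations drawn from N(0, B).

rng = np.random.default_rng(3)

n = 50                                    # boundary points
pts = np.arange(n)
B = np.exp(-0.5 * (np.subtract.outer(pts, pts) / 5.0) ** 2)   # correlation model
Lchol = np.linalg.cholesky(B + 1e-8 * np.eye(n))              # to sample N(0, B)

def draw():
    """One spatially correlated random draw from N(0, B)."""
    return Lchol @ rng.normal(size=n)

alpha = 0.7                               # AR(1) memory between BC update times
control = np.zeros(n)                     # stand-in for the GFS 12-hour forecast

pert = draw()
boundaries = []
for _ in range(8):                        # successive boundary times
    # red noise: keep part of the old perturbation, blend in a fresh draw;
    # the sqrt(1 - alpha^2) factor keeps the perturbation variance stationary
    pert = alpha * pert + np.sqrt(1.0 - alpha**2) * draw()
    boundaries.append(control + pert)     # perturbation centered on the control
```

Successive boundary fields are correlated in time (through alpha) and in space (through B), which keeps the perturbed lateral boundaries dynamically plausible rather than white noise.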

  16. UW Real-time EnKF • 90-member ensemble for analyses & forecasts. • 45 km grid over the E. Pacific & W. North America. • 6-hour assimilation cycle. • Observations assimilated (~5000-8000 per time): • Radiosondes. • ACARS. • Surface: ASOS, ships, buoys. • Cloud-track winds. • Analyses available ~t+3 hours. • 24-h forecasts at 00 & 12 UTC, available ~t+5.5 hours.

  17. U. Washington Real-time EnKF http://www.atmos.washington.edu/~enkf

  18. Probabilistic Analyses sea-level pressure 500 hPa height Large uncertainty associated with shortwave approaching in NW flow

  19. Ensemble Forecasts Analysis 24-hour forecast

  20. Ensemble Forecasts Analysis 24-hour forecast

  21. Sfc P 24 h Forecast Sensitivity

  22. Surface Pressure Verification Red = UW EnKF ensemble mean Green = UW EnKF ensemble members Blue = NCEP GFS

  23. 500 hPa Height Verification Red = UW EnKF ensemble mean Green = UW EnKF ensemble members Blue = NCEP GFS

  24. Sensitivity Example analysis SLP 850 hPa temp.

  25. SLP Climatological Sensitivity Sea-level pressure 500 hPa height

  26. 15 km Nested-Grid Experiment • Nested domain. • BCs from 45km EnKF. • Assimilation every 3 hours over 48 h period.

  27. Surface Winds

  28. Radar Reflectivity

  29. Reflectivity Analysis & Increment More precip over most of region; NW shift of primary band

  30. Current & Future Plans • Nested grids at higher resolution (AOR): • 15 km & 5 km. • More frequent updates (AOR): • every 3 hours, then hourly. • Also try “cheap” hourly updates: • update key fields only; no propagation; available ~t+15 min. • Ensemble Kalman smoother (AOR): • use observations to update earlier analyses. • Satellite radiance assimilation.

  31. Grid-Based Model Bias Removal: We All Need It! Model biases are a reality; we need to get rid of them.

  32. Grid-Based Bias Removal • In the past, the NWS has attempted to remove these biases only at observation locations (MOS, Perfect Prog). • Removal of systematic model bias on forecast grids is needed: • All models have significant systematic biases. • NWS and others want to distribute graphical forecasts on a grid (IFPS). • People and applications need accurate forecasts everywhere…not only at ASOS sites. • Important post-processing step for ensembles.

  33. How does one do it? • One cannot simply calculate the biases at observation locations and then spread them around with standard interpolation schemes. • Why? • Because you don’t want to spread the non-systematic biases particular to specific stations to their surroundings. • Because nearby stations might be of different elevation and land use…so even the systematic biases might differ.

  34. A Potential Solution: Observation-Based, Grid-Based Bias Removal • Base the bias removal on the biases of nearby observation sites, using each site's land-use category, elevation, and proximity. • Land use and elevation are key parameters that control physical biases. • Use these parameter values to ensure one only applies biases from observation locations in similar regimes.

  35. Spatial differences in bias

  36. The Method • Calculate model biases at observation locations by interpolating model forecasts to observation sites. • Identify a land use, elevation, and lat-lon for each observation site. • Calculate biases at these stations hourly; thus, one has a database of hourly biases. • For every forecast hour: at every forecast grid point, search for nearby stations of similar land use and elevation for which the previous forecast value is close to the forecast at the grid point in question. • E.g., if the forecast temperature was 60, only use biases for nearby stations of similar land use/elevation associated with forecasts of 55-65. • Collect a sufficient number of these (using the closest and most recent ones first) to average out local effects (roughly a half dozen). • Average the biases for these sites and apply the bias correction to the forecast.
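The station-matching step on slide 36 can be sketched as below. The station table, the elevation and forecast windows, the donor count, and the sign convention (bias = forecast − observed, so the correction subtracts it) are illustrative assumptions, not the operational UW/MM5 settings.

```python
import numpy as np

# Sketch: grid-based bias removal by averaging biases of nearby same-regime
# stations whose prior forecast value was similar to this grid point's.

# station records: (lat, lon, land_use, elevation_m, prior_forecast, bias)
stations = [
    (47.5, -122.3, "urban",   50.0, 61.0,  2.0),
    (47.6, -122.2, "urban",   80.0, 59.0,  1.5),
    (47.4, -121.4, "forest", 600.0, 52.0, -3.0),   # different regime
    (47.7, -122.4, "urban",   30.0, 63.0,  2.5),
    (48.1, -122.5, "urban",   40.0, 58.0,  1.0),
]

def corrected_forecast(fcst, lat, lon, land_use, elev,
                       max_donors=6, elev_win=200.0, fcst_win=5.0):
    """Average biases of nearby similar-regime stations; subtract the result."""
    donors = []
    for slat, slon, slu, selev, sfcst, sbias in stations:
        if slu != land_use:
            continue                     # require matching land-use category
        if abs(selev - elev) > elev_win:
            continue                     # require similar elevation
        if abs(sfcst - fcst) > fcst_win:
            continue                     # require a similar prior forecast value
        d2 = (slat - lat) ** 2 + (slon - lon) ** 2
        donors.append((d2, sbias))
    donors.sort()                        # closest stations first
    if not donors:
        return fcst                      # no usable donors: leave uncorrected
    biases = [b for _, b in donors[:max_donors]]
    return fcst - float(np.mean(biases))
```

For a hypothetical urban grid point forecasting 60, the four urban stations qualify (all within the forecast and elevation windows) while the forest station is excluded, so only biases from the matching regime are spread to the grid point.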

  37. Grid-Based Bias Removal: Why is this approach good? • Takes out diurnal bias. • Takes land use into consideration (e.g., land versus water; generally will only use stations from the correct side of mountains). • Does not spread representativeness error. • Based on recent biases, so it adapts to time of year and model changes. • Only uses biases from similar regimes.

  38. Raw 12-h Forecast Bias-Corrected Forecast

  39. Salt Lake City

  40. Bozeman

  41. Next Steps • We are now operationally running grid-based bias removal for Northwest MM5 surface grids. • This month we will begin shipping bias-corrected grids to the NWS for testing in IFPS. • Testing nationally?

  42. WRF vs. MM5 (vs. Eta and GFS) • How Good is WRF over the Western U.S.? • How Does It Compare to MM5, Eta, and GFS? • Does It Offer Substantial Benefits?

  43. WRF vs. MM5 (vs. Eta and GFS) • Until this year, the UW verification system was evaluating MM5 (36-12-4 km) along with the NCEP Eta and GFS. • In January we began running WRF (ARW core) at 36-12 km (twice a day to 48 h) to provide a detailed comparison between WRF and the other models over the complex terrain and land-water contrasts of the Pacific Northwest. • This takes advantage of the extensive verification system already in place.

  44. MM5 1K Winds WRF

  45. First Subjective Impressions • Low-level wind fields in terrain very similar. • This is also true of thermal fields. • WRF shows more structure in precipitation and tends to be more aggressive (although there are differences in microphysics schemes).

  46. Next Steps • Add WRF ARW to objective verification system (now in progress) to provide real numbers regarding performance. • Add WRF NMM run when available. • Transition from MM5 to WRF (best core) for regional NWP later this year if no negative impacts.
