
Current Status of the Development of the Local Ensemble Transform Kalman Filter at UMD

Istvan Szunyogh, representing the UMD “Chaos-Weather” Group. Ensemble Data Assimilation Workshop, Balcones Springs, TX, April 9-12.



Presentation Transcript


  1. Current Status of the Development of the Local Ensemble Transform Kalman Filter at UMD. Istvan Szunyogh, representing the UMD “Chaos-Weather” Group. Ensemble Data Assimilation Workshop, Balcones Springs, TX, April 9-12

  2. Theoretical Developments • Transition from LEKF to LETKF (Hunt 2005, submitted to Physica D) • Improved computational efficiency • More flexibility for non-local observation operators • More straightforward implementation of 4-D extension • Bias Correction • State space augmentation algorithms (Baek et al., 2006: Tellus, 58A, 293-306)

  3. Key Features of the LETKF • The analysis (ensemble) is obtained independently for each grid point--similarly to the LEKF • The analysis is done in the space of all observations that are thought to be relevant for the estimation of the state at the given model grid point--in contrast to the LEKF, the definition of local state vectors and an SVD calculation on the state vectors are not needed • The LETKF (LEKF) is not a sequential assimilation scheme--in contrast to other flavors of EnKF
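The per-grid-point analysis described on this slide can be sketched in ensemble space. The following is a minimal, single-grid-point illustration of the LETKF update in the Hunt et al. formulation; the function name, the linear observation operator H, and the toy ensemble in the usage example are assumptions for illustration, not the UMD implementation.

```python
import numpy as np

def letkf_analysis(Xb, y, H, R, rho=1.0):
    """One local LETKF analysis (sketch, Hunt et al. formulation).

    Xb  : (n, k) background ensemble (k members, n state variables)
    y   : (m,) local observations
    H   : (m, n) linear observation operator
    R   : (m, m) observation-error covariance
    rho : multiplicative covariance inflation factor
    """
    n, k = Xb.shape
    xb = Xb.mean(axis=1)                  # background ensemble mean
    Xp = Xb - xb[:, None]                 # background perturbations
    Yb = H @ Xb                           # ensemble mapped to observation space
    yb = Yb.mean(axis=1)
    Yp = Yb - yb[:, None]
    C = Yp.T @ np.linalg.inv(R)           # (k, m)
    # Analysis error covariance in the k-dimensional ensemble space
    Pa = np.linalg.inv((k - 1) / rho * np.eye(k) + C @ Yp)
    # Symmetric square root of (k-1)*Pa gives the perturbation weights
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
    wa = Pa @ C @ (y - yb)                # weights for the mean update
    # Analysis ensemble: mean shift plus transformed perturbations
    Xa = xb[:, None] + Xp @ (wa[:, None] + Wa)
    return Xa
```

With a small toy ensemble and one observation of the first state variable, the analysis mean moves toward the observation and the ensemble spread shrinks, which is the qualitative behavior the scheme should exhibit.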

  4. Current Implementation on the NCEP GFS (I. Szunyogh, E. Kostelich, G. Gyarmati, et al.) • 3D implementation of LETKF: observations are assimilated from a 3-hour window centered at analysis time (no time interpolation is done) • 40-member ensemble; observations are selected from 7x7x7 grid-point local cubes; multiplicative variance inflation (25% in NH extra-tropics, 20% in tropics, and 15% in SH extra-tropics) • Quality control: if the observational increment is larger than 20 hPa or 12 K, the observation is not assimilated; observations below sigma layer 2 are not assimilated • Correction of surface pressure observations: based on hydrostatic balance
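The observation selection and quality-control checks on this slide can be sketched as two small helpers. The 20 hPa / 12 K thresholds and the 7x7x7 cube come from the slide; the function names, the array layout (observations carrying integer grid coordinates), and the omission of longitude wrap-around are simplifying assumptions for illustration.

```python
import numpy as np

PS_LIMIT = 20.0   # hPa: maximum accepted surface-pressure increment
T_LIMIT = 12.0    # K: maximum accepted temperature increment
HALF_WIDTH = 3    # 7x7x7 cube = 3 grid points on either side of the center

def passes_qc(obs_value, background_value, kind):
    """Reject an observation whose increment |y - H(xb)| exceeds the limit.

    kind : "ps" for surface pressure, anything else treated as temperature.
    """
    increment = abs(obs_value - background_value)
    limit = PS_LIMIT if kind == "ps" else T_LIMIT
    return increment <= limit

def local_obs_indices(obs_ijk, center_ijk):
    """Indices of observations inside the 7x7x7 cube around a grid point.

    obs_ijk    : (m, 3) integer grid coordinates of the observations
    center_ijk : (3,) grid coordinates of the analysis point
    (No periodic wrap in longitude -- a simplification of this sketch.)
    """
    d = np.abs(np.asarray(obs_ijk) - np.asarray(center_ijk))
    return np.where((d <= HALF_WIDTH).all(axis=1))[0]
```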

  5. Difference Between the LETKF and Operational NCEP Temperature Analyses at 600 hPa The difference between the two analyses appears to be negatively correlated with the number of observations in the region; e.g., over North America, where observations are dense, the two analyses are very similar. Based on 90 analysis cycles.

  6. Verification • Based on 24-hour forecasts obtained with the T62, L28 resolution model • The benchmark analysis is obtained by assimilating all operationally assimilated observations, except for satellite radiances, with the operational NCEP SSI--provided by NCEP (Y. Song, Z. Toth and R. Wobus) • The forecasts are verified against the operational NCEP analysis • This verification approach is more favorable for the benchmark; the results can be trusted most in regions where the operational system relies heavily on satellite radiances (e.g., the SH)!

  7. Temperature “Error”--SH Extratropics The LETKF is much more “accurate” at the upper model levels (above 300 hPa). [Figure: vertical error profiles, LETKF vs. benchmark]

  8. Temperature “Error”--Tropics The LETKF still has a clear advantage at the upper levels. [Figure: vertical error profiles, LETKF vs. benchmark]

  9. Temperature “Error”--NH Extratropics The LETKF still has a small advantage at the top levels. It is unclear at this point whether the benchmark's advantage in the troposphere is real or just an artifact of the verification.

  10. Timing Results (2.6 GHz processors, 1 Gbit Ethernet, ~260,000 observations) • Preprocessing ~120s • Reading background files and observations • Calculating spectral transforms • Distributing grid-point values and observations • Analysis (parallel, 20 processors) ~50-300s • Data is distributed by latitudes (no load balancing) • Post-processing ~60s • Spectral transforms • Writing results • Total Wall Clock Time: ~10 minutes • We believe that further optimization could reduce the wall clock time by 50-75%

  11. Current Efforts • Code optimization • Load balancing (for a 40-member ensemble the computational cost is ~B x log(S), where B is the number of grid points at which an analysis is computed and S is the number of observations needed for each; the load-balancing strategy is based on balancing B x log(S)) • Implementation of MPI • 4D extension • Our algorithm evolves both the background and the background covariance in time (Hunt et al., 2004, Tellus, 56A, 273-277) • Allows for implementing an internal digital filter (similar to that used in operational 4D-Var systems) • Bias Correction • Pre-implementation testing on the SPEEDY model (simple primitive-equation model by F. Molteni)
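The load-balancing idea above can be sketched as a greedy largest-first assignment: each grid point is charged a cost of log(S), per the B x log(S) estimate, and assigned to the currently least-loaded processor. The function name and the heap-based scheme are illustrative assumptions; the actual strategy in the UMD code may differ.

```python
import heapq
import math

def balance_load(obs_counts, n_procs):
    """Greedily assign analysis grid points to processors.

    obs_counts : per-grid-point number of local observations S
    n_procs    : number of processors
    Each point costs log(S); points are taken largest-first and given
    to the least-loaded processor (the classic LPT heuristic).
    Returns (assignment, per-processor loads).
    """
    # Sort points by descending cost; guard log against S = 0.
    costs = sorted(((math.log(max(s, 1)), i) for i, s in enumerate(obs_counts)),
                   reverse=True)
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = [0] * len(obs_counts)
    loads = [0.0] * n_procs
    for cost, i in costs:
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[i] = p
        loads[p] = load + cost
        heapq.heappush(heap, (loads[p], p))
    return assignment, loads
```

A mix of densely and sparsely observed points then splits evenly across processors, unlike the current by-latitude distribution, where one processor can receive all the expensive points.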

  12. Summary • Reasons to be optimistic • The system is already competitive with the operational 3D-Var in terms of accuracy, and there are many straightforward improvements that can be made (4D handling of data, internal digital filtering, assimilation of radiances, bias correction, more sophisticated quality control and variance inflation, etc.) • High computational efficiency (probably sufficient for an operational implementation) and modest cost of adding new observations • The only potential problem • An increase in the number of available observations may significantly reduce the accuracy advantage of a Kalman filter over 3D-Var (the differences between the two systems are clearly smaller in regions of high observational density)
