
Advanced Data Assimilation Methods

Advanced Data Assimilation Methods. 10 December 2012. Thematic Outline of Basic Concepts. What is 4D-Var, and how does it differ from and extend upon 3D-Var? What is an Extended Kalman Filter, and how does it extend upon least-squares minimization methods?



  1. Advanced Data Assimilation Methods 10 December 2012

  2. Thematic Outline of Basic Concepts • What is 4D-Var, and how does it differ from and extend 3D-Var? • What is an Extended Kalman Filter, and how does it extend least-squares minimization methods? • What is an Ensemble Kalman Filter, and what are its strengths relative to other data assimilation methods?

  3. 4D-Var • Generalization of 3D-Var to allow for the continuous assimilation of all available observations over some assimilation interval. • Recall the cost function for 3D-Var: J(x) = (1/2)(x − xb)^T B^(−1) (x − xb) + (1/2)(y − H(x))^T R^(−1) (y − H(x)) • The first term measures departure from the background; the second measures departure from the observations, defining the analysis increment. Both terms are multidimensional, each weighted by the inverse of its respective error covariance matrix.

  4. 4D-Var • For 4D-Var, the cost function has a similar form: J(x(t0)) = (1/2)(x(t0) − xb)^T B^(−1) (x(t0) − xb) + (1/2) Σ(i=1..m) (yi − Hi(xi))^T Ri^(−1) (yi − Hi(xi)) • The background xb is valid at the start of the assimilation window, and the summation assimilates the m observations one at a time. • This cost function is equivalent to 3D-Var if observations are valid only at a single time t0.
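
As an illustrative sketch (not the operational formulation), the 4D-Var cost function above can be evaluated for a toy two-variable state: the linear model, observation operator, covariances, and observation values below are all invented for demonstration, with B and R set to identity purely for simplicity.

```python
import numpy as np

n = 2                       # state dimension
B_inv = np.eye(n)           # inverse background error covariance (illustrative)
R_inv = np.eye(1)           # inverse observation error covariance (scalar obs)
H = np.array([[1.0, 0.0]])  # linear observation operator: observe first variable

def model_step(x):
    """Toy linear 'forecast model' M advancing the state one time step."""
    M = np.array([[0.9, 0.1], [0.0, 0.95]])
    return M @ x

def cost_4dvar(x0, xb, obs):
    """4D-Var cost: background term at t0 plus one observation term per
    time i, with x_i obtained by integrating the model forward."""
    dx = x0 - xb
    J = 0.5 * dx @ B_inv @ dx
    x = x0
    for i, y_i in enumerate(obs):
        if i > 0:
            x = model_step(x)      # x_i = M(x_{i-1})
        d = y_i - H @ x            # innovation at time i
        J += 0.5 * d @ R_inv @ d
    return float(J)

xb = np.array([1.0, 0.0])
obs = [np.array([1.1]), np.array([0.95]), np.array([0.9])]
print(cost_4dvar(xb, xb, obs))   # cost with x0 set to the background
```

With x0 equal to the background, only the observation terms contribute; moving x0 toward a better-fitting initial state trades a larger background term against smaller observation terms.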

  5. 4D-Var • Each observation is assimilated at the time that it is taken; thus xi is the model state vector valid at the time of observation i. • Contrast this with x(t0), the model state vector valid at the start of the assimilation window. If all observations are at t = t0, then xi = x(t0) (i.e., 4D-Var collapses to 3D-Var).

  6. 4D-Var • Minimization of the cost function proceeds similarly to 3D-Var, albeit with some differences… • Inherent temporal evolution: each observation time is associated with a new xi and Ri. • Each observation assimilation requires forward integration of the model from t0 to tm (a/k/a ti). • Post-assimilation, an adjoint operator must be used to integrate backward to finalize the updated analysis. • 4D-Var is thus more computationally expensive than 3D-Var, limiting its operational acceptance.

  7. 4D-Var • Adjoint model: based on the tangent linear version M of the non-linear model operator M. • Defined as the transpose of the tangent linear model operator (Mi^T). • The adjoint model is specific to the forecast model used for DA and to conduct the subsequent forecast simulations. • Thus, every time the model code is upgraded, the adjoint code must also be updated, which is not a trivial process!
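
The defining property of the adjoint is that <M x, y> = <x, M^T y> for arbitrary x and y; in finite dimensions the adjoint of the tangent linear operator is simply its matrix transpose. A minimal sketch of the "dot-product test" commonly used to verify adjoint code, with a random matrix standing in for the tangent linear operator:

```python
import numpy as np

# Dot-product test: the tangent linear operator M and its adjoint M^T
# must satisfy <M x, y> == <x, M^T y> up to floating-point rounding.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))   # stand-in tangent linear operator

def tangent_linear(x):
    return M @ x

def adjoint(y):
    return M.T @ y

x = rng.standard_normal(4)
y = rng.standard_normal(4)
lhs = np.dot(tangent_linear(x), y)
rhs = np.dot(x, adjoint(y))
print(abs(lhs - rhs))   # near zero if the adjoint is coded correctly
```

In a real system, `tangent_linear` and `adjoint` would be large hand-written codes, which is exactly why this consistency check matters after every model upgrade.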

  8. 4D-Var • Cost function minimization also requires that the sequence of model states xi (the modified forecast model state xf) provides valid solutions to the model. • In other words, there must be dynamical consistency between xi and the model dynamical equations. • Mathematically, xi = Mi(x(t0)) for all i; that is, each xi must be representative of the model forecast from t = 0 to t = ti.

  9. 4D-Var [figure: background forecast (dashed line), observations (stars), and the corrected analysis trajectory across the assimilation window]

  10. 4D-Var • 3D-Var: minimize cost function to optimally combine background (dashed line) and observations (stars) at one assimilation time • 4D-Var: estimate the analysis state vector xa (at t=t0) producing a model solution M that minimizes the cost function. • Defines an initial condition that produces a forecast that minimizes total cost throughout assimilation window. • Cost function includes both the background (JB) and each observation (all of the JO terms at each time ti).

  11. 4D-Var • In the depicted example, the corrected and previous forecasts do not significantly differ at tn. • Note: the previous forecast is the one that defines the background at t0 and continues forward in time. • However, the details between t0 and tn differ, providing a better fit to the observations as manifest through the temporal evolution of the forecast.

  12. 4D-Var • Start: background estimate, nominally from a prior model forecast. • Assimilate observations at background time (3D-Var). • Integrate model forward to next observation time. • Assimilate observation to help correct forecast state.

  13. 4D-Var • Repeat the previous two steps over all observations. • Use adjoint model to go backward to the background time to complete assimilation. • Result: a “corrected” analysis and short-term forecast (~3- to ~6-h) evolution. • Implicit: balance condition; consistency with model.
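
The forward-then-backward structure of the steps above can be sketched for a toy linear model: one forward sweep stores the trajectory, and one backward sweep with the adjoint (here simply the matrix transpose) accumulates the gradient of the 4D-Var cost with respect to the initial state. All operators and values are illustrative assumptions, not any operational configuration.

```python
import numpy as np

M = np.array([[0.9, 0.1], [0.0, 0.95]])   # toy linear model; its Jacobian is itself
H = np.array([[1.0, 0.0]])                # observation operator
B_inv, R_inv = np.eye(2), np.eye(1)
xb = np.array([1.0, 0.0])
obs = [np.array([1.1]), np.array([0.95]), np.array([0.9])]

def grad_J(x0):
    """Gradient of the 4D-Var cost w.r.t. the initial state x0."""
    traj = [x0]                            # forward sweep: store x_0 ... x_m
    for _ in range(len(obs) - 1):
        traj.append(M @ traj[-1])
    adj = np.zeros_like(x0)                # backward (adjoint) sweep
    for i in reversed(range(len(obs))):
        adj = adj + H.T @ R_inv @ (H @ traj[i] - obs[i])   # obs forcing at t_i
        if i > 0:
            adj = M.T @ adj                # propagate adjoint variable back one step
    return B_inv @ (x0 - xb) + adj
```

A descent algorithm would call `grad_J` repeatedly to update x0, which is what "minimizing total cost throughout the assimilation window" means in practice.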

  14. Extended Kalman Filter (EKF) • A specific application of least squares minimization that emphasizes time dependency in the calculation of the background error covariance matrix. • Since Pf often comes from a short-range model forecast, it is sometimes referred to as a forecast error covariance matrix. • Time-independent: B = background error covariance matrix, A = analysis error covariance matrix. • Time-dependent: Pf = background error covariance matrix, Pa = analysis error covariance matrix.

  15. Extended Kalman Filter

  16. Extended Kalman Filter • First, obtain a first-guess estimate by integrating the model forward from a previous analysis. • Next, compute the forecast error covariance matrix at t = t+1 for use when constructing the next analysis: Pf(t+1) = M Pa(t) M^T + Q(t), where M = Jacobian of M (i.e., ∂M/∂x) and Q = model forecast error covariance matrix at time t.

  17. Extended Kalman Filter • The model forecast error covariance matrix Q(t) is used as a starting point. • Added to this is the forward propagation (in time) of the analysis error covariance matrix Pa(t). • This is conducted using the tangent linear version of the model M and its adjoint M^T. • Requires knowledge of Pa at the very first time; for later times, we are able to compute it (more shortly).
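
A minimal sketch of this covariance propagation step, Pf(t+1) = M Pa(t) M^T + Q(t), with invented 2x2 matrices:

```python
import numpy as np

M = np.array([[1.0, 0.1], [0.0, 1.0]])   # tangent linear model (Jacobian of M)
Pa = np.diag([0.5, 0.2])                  # analysis error covariance at time t
Q = 0.01 * np.eye(2)                      # model error covariance at time t

# Propagate analysis errors through the tangent linear model and its
# adjoint (transpose), then add the model's own error contribution.
Pf_next = M @ Pa @ M.T + Q
print(Pf_next)
```

Note how the off-diagonal terms of Pf_next are nonzero even though Pa was diagonal: the model dynamics create flow-dependent error correlations.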

  18. Extended Kalman Filter • Forecast time t+1 now becomes the analysis time t. • Next, we compute the Kalman gain matrix K(t), the analog to the weighting matrix K used before: K(t) = Pf H^T (H Pf H^T + R)^(−1) • R = observation error covariance matrix; Pf = time-dependent background error covariance matrix (replacing the time-independent B); H = Jacobian of the forward observation operator H (∂H/∂x); H…H^T = transformation to observation space.

  19. Extended Kalman Filter • K(t) and K are identical apart from the time dependency manifest via Pf versus B. • As before, they represent the ratio of the transformed background error covariance matrix to the total (obs + background) error covariance matrix. • Once K(t) has been computed, it is used as the weighting in obtaining a new analysis state estimate. • It determines the multidimensional spread of observational information.

  20. Extended Kalman Filter • Next, obtain the new analysis state estimate: xa = xf + K(t)(y − H(xf)) • As before, this represents the background estimate plus an optimally-weighted innovation (posed in observation space). • It minimizes the variance in xa by optimally weighting xf and y based upon their known error characteristics. • Difference from before: K(t) is computed directly from the model (flow-dependent) at each time (time-dependent)!

  21. Extended Kalman Filter • Finally, compute the analysis error covariance matrix: Pa = (I − K(t)H) Pf • Recall the simple 1-D analog for the analysis error variance: σa^2 = (1 − k) σb^2, i.e., the background variance weighted by one minus the weighting factor. • Here σa^2 becomes Pa, 1 becomes I (the identity matrix), and k becomes K(t)H (the weighting combined with the observation operator).
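
The gain, analysis, and analysis-error-covariance computations above can be sketched together in one analysis step; the two-variable state, single observation, and covariance values below are invented for illustration.

```python
import numpy as np

Pf = np.array([[0.5, 0.1], [0.1, 0.3]])   # forecast error covariance
H = np.array([[1.0, 0.0]])                # observe the first state variable
R = np.array([[0.2]])                     # observation error covariance
xf = np.array([1.0, 2.0])                 # forecast (background) state
y = np.array([1.4])                       # observation

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
xa = xf + K @ (y - H @ xf)                       # analysis state
Pa = (np.eye(2) - K @ H) @ Pf                    # analysis error covariance
```

Even the unobserved second variable is updated, because the off-diagonal term of Pf spreads the observational information; both analysis error variances end up smaller than their forecast counterparts.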

  22. Extended Kalman Filter • Having created a new analysis, the model can then be integrated forward in time to the next time. • The above-computed analysis error covariance is then used to compute the forecast error covariance at the next time. • And on and on from there…

  23. EKF vs. 4D-Var • Evolution of the background error covariance. • EKF: explicitly computed over each assimilation cycle • 4D-Var: manifest via different H(xi) – yi values for each observation (e.g., different innovations for each time as model forecast evolves, giving time-dependency) • Characteristics of the model. • EKF: model is imperfect • Forecast errors due to the forward propagation of analysis errors and errors in the model itself (Q ≠ 0). • Need to have some way to describe Q (difficult) to obtain the best possible analysis!

  24. EKF vs. 4D-Var • 4D-Var: model is perfect • Forecast errors due to the forward propagation of analysis errors only (cost function analogy to Q = 0). • Method of data assimilation. • EKF: intermittent/sequential, least squares minimization • 4D-Var: continuous, cost function minimization • Computational expense. • EKF is more computationally expensive than is 4D-Var due to added computations at each assimilation time.

  25. Ensemble Kalman Filter (EnKF) • As the name implies, we start with an ensemble of atmospheric state analyses. • How is such an ensemble initially obtained? • Basis: a uniform background (e.g., from a global model) • Ensemble: constructed by perturbing the common background state estimate. • Typically random perturbations drawn from some estimate of the background error covariance matrix B. • May also incorporate the assimilation of observations with perturbations of magnitude smaller than the expected obs error.

  26. Ensemble Kalman Filter • Each member of the ensemble is then integrated forward in time. • The ensemble of short-range forecasts is used as the background for the next analysis. • Observations are assimilated (without any perturbations) to correct the forecast to the observations. • If the simulation has lateral boundaries, the LBCs must be perturbed at each time on the outermost domain. • The cycle then repeats from here.

  27. Ensemble Kalman Filter • Assimilation is accomplished using the EKF framework from before with slight modifications. • Definitions… • Ensemble mean forecast state: x̄f = (1/K) Σ(i=1..K) xf,i, where i = individual ensemble member and K = total number of ensemble members. • Forecast error covariance matrix (of the sample of xf): Pf ≈ (1/(K − 1)) Σ(i=1..K) (xf,i − x̄f)(xf,i − x̄f)^T

  28. Ensemble Kalman Filter • This equation replaces our earlier equation for Pf. • It provides a direct estimate of Q and of how analysis errors have propagated forward, without needing an adjoint! • It represents the average departure of each ensemble forecast state from the ensemble mean forecast state (i.e., not explicitly the analysis or true state). • Note: this Pf is only an estimate of the true Pf from a finite ensemble.
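
A minimal sketch of this sample estimate of Pf; the ensemble here is randomly generated from an assumed "true" covariance purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
K_ens = 50                                          # ensemble size
truth_cov = np.array([[0.5, 0.2], [0.2, 0.4]])      # assumed true forecast error cov
ensemble = rng.multivariate_normal([0, 0], truth_cov, size=K_ens)  # shape (K, n)

x_mean = ensemble.mean(axis=0)          # ensemble mean forecast state
devs = ensemble - x_mean                # departures from the ensemble mean
Pf_est = devs.T @ devs / (K_ens - 1)    # sample forecast error covariance
```

With only a finite ensemble, Pf_est fluctuates around truth_cov; sampling error of this kind is why practical EnKFs use localization and inflation.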

  29. Ensemble Kalman Filter • Similar to the EKF method: K(t) = Pf H^T (H Pf H^T + R)^(−1) (Kalman gain matrix) and xa = xf + K(t)(y − H(xf)) (new analysis state). • From the new analysis state, the analysis error covariance may be computed similarly to the EKF.

  30. Ensemble Kalman Filter • Analysis error covariance: Pa = (I − K(t)H) Pf • This highlights the strengths of the EnKF: internal calculation of Pa and Pf based upon the ensemble’s flow-dependent error characteristics!

  31. Ensemble Kalman Filter • initial analyses → integrate forward in time → estimate Pf → compute K and perform DA → obtain new analyses → repeat
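
The cycle above can be sketched end-to-end for a scalar state with a trivial linear model, using the perturbed-observations variant of the stochastic EnKF; every number here (ensemble size, model, error variances, the "truth") is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_cycles = 40, 5
r = 0.1                                   # observation error variance

def model(x):
    """Trivial linear 'forecast model'."""
    return 0.95 * x

truth = 2.0
ens = rng.normal(0.0, 1.0, n_ens)         # initial perturbed analyses

for _ in range(n_cycles):
    truth = model(truth)
    ens = model(ens)                                  # integrate each member forward
    pf = ens.var(ddof=1)                              # estimate Pf from the ensemble
    y = truth + rng.normal(0.0, np.sqrt(r))           # observation of the truth
    k = pf / (pf + r)                                 # scalar Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(r), n_ens)   # perturbed obs per member
    ens = ens + k * (y_pert - ens)                    # analysis update, member by member

print(ens.mean(), truth)   # ensemble mean should track the truth
```

Perturbing the observations for each member keeps the analysis ensemble spread statistically consistent with Pa; deterministic square-root filters achieve the same without perturbations.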

  32. Ensemble Kalman Filter vs. 4D-Var • Computation of Pf does not require the use of an adjoint model. • By contrast, even EKF uses one (see Eqn. 6.26b earlier). • This eliminates the assumptions (linearity, etc.) and additional code associated with the adjoint method. • Both methods are computationally demanding. • EnKF provides an elegant way to determine the temporally- and spatially-varying structure in the background error covariance from the ensemble.

  33. Ensemble Kalman Filter • shaded: normalized covariance of SLP; contour: sea-level pressure. • Emphasis is on the five dots (observation locations).

  34. Ensemble Kalman Filter • How does the ensemble estimate of sea level pressure vary as a function of the observations at these locations? • Recall: sensitivity metric defined in an earlier lecture • Can use covariance information to illustrate this! • Covariance is largest in data-sparse areas and/or along sharp gradients. • Forecast metric at the observation locations is most sensitive to SLP uncertainty in the shaded regions.
