
Definition of data assimilation problem and some solutions

Summer School on Ocean Observation with Remote Sensing Satellites, 18-23 June 2010. Definition of data assimilation problem and some solutions. Lecturer: Srdjan Dobricic, CMCC, Bologna. Statistical background (model and observational spaces).


Presentation Transcript


  1. Summer School on Ocean Observation with Remote Sensing Satellites, 18-23 June 2010. Definition of data assimilation problem and some solutions. Lecturer: Srdjan Dobricic, CMCC, Bologna

  2. Statistical background (model and observational spaces): the model state vector x, which may contain all model points (im*jm*km*param values), and the observational state vector y.

  3. Statistical background (introduction) The model state vector has a temporal dimension which spans the analysis window: x_0, x_1, ..., x_k.

  4. Statistical background (introduction) The observational vector also has a temporal dimension which spans the analysis window: y_0, y_1, ..., y_k.

  5. Statistical background (introduction) Model and observational spaces are connected by the (non-linear) observational operator H. It is assumed that the modelled process is of the Markov type: the state at each time step is obtained from the previous one by the (non-linear) model dynamical operator M.
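In the notation used below (the symbols are assumed, following common data assimilation convention), these two relations read

\[ y_i = H_i(x_i) + \varepsilon_i, \qquad x_i = M_i(x_{i-1}) + \eta_i, \]

where \varepsilon_i are observational errors and \eta_i are model errors.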

  6. Statistical background (introduction) The posterior conditional pdf of the inverse solution is given by Bayes' theorem: it combines p(y|x), the probability density of the observations given the model state, with p(x), the prior probability density of the model estimate, where y is the observational state vector and x the model state vector.
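Written out, with the normalization constant omitted:

\[ p(x \mid y) \;\propto\; p(y \mid x)\, p(x). \]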

  7. Definition of cost function (Gaussian model)

  8. Definition of cost function (Gaussian model) The maximum-likelihood model state is the one which gives the maximum of the posterior pdf:

  9. Definition of cost function (Gaussian model) For Gaussian pdfs the maximum of the posterior is reached where the magnitude of the exponent is smallest. Therefore, in order to find the maximum-likelihood model state vector we must find the minimum of the following cost function:
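For Gaussian error statistics the cost function takes the standard quadratic form (a sketch in the assumed notation; the covariance matrices C and R are defined on the following slides):

\[ J(x) = \tfrac{1}{2}\, \varepsilon^{\mathsf T} C^{-1} \varepsilon \;+\; \tfrac{1}{2}\, \big(y - H(x)\big)^{\mathsf T} R^{-1} \big(y - H(x)\big), \]

where \varepsilon collects the errors of the a priori model state estimate.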

  10. Definition of cost function (Gaussian model) The errors in the observational term contain observational errors and “representativeness” errors.

  11. Definition of cost function (Gaussian model) The background term carries the a priori information on model parameters. It can be any reasonable a priori estimate.

  12. Definition of error covariances (Gaussian model) Some errors are errors of the initial state estimate, and the others are model errors. Is there some other way to define the a priori model state estimate?

  13. Definition of error covariances (Gaussian model) C - covariances of model state errors; B0 - covariances of initial model state errors; Q - covariances of model errors; A - covariances between initial model state errors and model errors.

  14. Definition of error covariances (Gaussian model) Now, in order to simplify the mathematics, we may assume that A = 0, i.e. that the errors of the initial state are independent of the model errors. Can we do that?

  15. Definition of error covariances (Gaussian model) If A = 0, the cost function becomes:
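With A = 0 the matrix C becomes block diagonal (a sketch using the definitions from slide 13):

\[ C = \begin{pmatrix} B_0 & 0 \\ 0 & Q \end{pmatrix}, \qquad J = \tfrac{1}{2}\, \varepsilon_0^{\mathsf T} B_0^{-1} \varepsilon_0 + \tfrac{1}{2}\, \eta^{\mathsf T} Q^{-1} \eta + \tfrac{1}{2}\, \big(y - H(x)\big)^{\mathsf T} R^{-1} \big(y - H(x)\big), \]

where \varepsilon_0 is the error of the initial state estimate and \eta the vector of model errors.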

  16. Definition of cost function (Gaussian model) We can further simplify the cost function by assuming a special structure of error covariances in Q. Often it is assumed that model errors are unbiased and uncorrelated in time:

  17. Definition of cost function (Gaussian model) We can also assume a special structure of error covariances in R. Often it is assumed that observational and representativeness errors are unbiased and uncorrelated in time:
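Under these assumptions Q and R become block diagonal in time,

\[ Q = \mathrm{diag}(Q_1, \ldots, Q_k), \qquad R = \mathrm{diag}(R_0, \ldots, R_k), \]

so the corresponding terms in the cost function decouple into sums over the individual time steps.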

  18. Definition of cost function (Gaussian model) In this case the cost function becomes: What is the best definition of the a priori model state estimates?

  19. Definition of cost function (Gaussian model) [Two alternative cases for the a priori model state estimate; the corresponding formulas are on the slide.]

  20. Definition of cost function (Gaussian model) Another simplification of the cost function is made by the assumption that model errors are constant in time. In this case we can write the model error as a single vector, and the cost function becomes:
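Under that assumption a single error vector \eta and a single covariance matrix Q suffice, and the model error term reduces to (a sketch in the notation above):

\[ \tfrac{1}{2}\, \eta^{\mathsf T} Q^{-1} \eta. \]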

  21. Definition of 4D-VAR cost function Furthermore, we can simplify the cost function by assuming that the model is perfect (no model error term); the cost function then takes the form sketched below. Usually in the meteorological and oceanographic literature this cost function is named 4D-VAR, with the perfect-model assumption left implicit.
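The strong-constraint 4D-VAR cost function has the standard form

\[ J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B_0^{-1} (x_0 - x_b) + \tfrac{1}{2} \sum_{i=0}^{k} \big(y_i - H_i(x_i)\big)^{\mathsf T} R_i^{-1} \big(y_i - H_i(x_i)\big), \qquad x_i = M_{0 \to i}(x_0), \]

where x_b is the a priori (background) estimate of the initial state and M_{0 \to i} denotes the model integration from time 0 to time i.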

  22. Minimization of 4D-VAR cost function: Newton method The zero of a function f(x) can be found efficiently using its derivative, producing successive iterates x_1, x_2, x_3, ... At the minimum of the cost function the gradient g(x) is equal to zero, so the same method can be applied to g.
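A minimal sketch of the idea in Python (the quadratic cost function below is an invented example, not the lecture's cost function):

    import numpy as np

    # Newton's method finds the zero of the gradient g(x), i.e. the
    # minimum of a cost function J. Toy quadratic: J = 0.5 x'Ax - b'x.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])   # Hessian of the toy J
    b = np.array([1.0, -1.0])

    def grad(x):                              # g(x) = A x - b
        return A @ x - b

    x = np.zeros(2)
    for _ in range(10):
        # Newton step: x_{n+1} = x_n - H^{-1} g(x_n)
        x = x - np.linalg.solve(A, grad(x))
    print(x, grad(x))                         # gradient is ~0 at the minimum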

  23. Minimization of 4D-VAR cost function: Quasi-Newton and conjugate gradient methods In the quasi-Newton method the second derivative of J is approximated by symmetric positive definite matrices which are updated in each iteration of the minimizer by a symmetric update. In the conjugate gradient method the second derivative of J is estimated by using orthogonal directions. Again we need the information about the gradient of J. There are freely available software packages which apply the BFGS formula with limited computer memory (L-BFGS).
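One such freely available implementation is the limited-memory BFGS minimizer in SciPy; a usage sketch, with the same invented quadratic standing in for J (in a real system the cost and gradient come from model and adjoint runs):

    import numpy as np
    from scipy.optimize import minimize

    A = np.array([[3.0, 1.0], [1.0, 2.0]])    # toy Hessian, invented
    b = np.array([1.0, -1.0])

    def cost(x):
        return 0.5 * x @ A @ x - b @ x

    def grad(x):
        return A @ x - b

    # L-BFGS needs only cost and gradient evaluations, never the Hessian.
    res = minimize(cost, x0=np.zeros(2), jac=grad, method="L-BFGS-B")
    print(res.x)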

  24. Tangent linear approximation Observational operator: a small perturbation p around the state x_k changes H(x_k) to H(x_k + p), which is approximated by the tangent of H at x_k.

  25. Tangent linear approximation Observational operator:

  26. Tangent linear approximation Model operator:
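In standard form, the tangent linear approximations of the two operators read

\[ H(x_k + p) \approx H(x_k) + \mathbf{H}\, p, \qquad M(x_k + p) \approx M(x_k) + \mathbf{M}\, p, \]

where \mathbf{H} and \mathbf{M} are the Jacobians (tangent linear operators) of H and M evaluated at x_k.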

  27. Gradient of 4D-VAR cost function
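For the perfect-model case the gradient has the standard form

\[ \nabla J(x_0) = B_0^{-1}(x_0 - x_b) - \sum_{i=0}^{k} \mathbf{M}_{0 \to i}^{\mathsf T}\, \mathbf{H}_i^{\mathsf T} R_i^{-1} d_i, \qquad d_i = y_i - H_i(x_i), \]

where \mathbf{M}_{0 \to i}^{\mathsf T} is the adjoint of the tangent linear model; the backward adjoint integration described on slide 32 evaluates the sum efficiently.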

  28. Linearized 4D-VAR cost function (perfect model)

  29. Minimization of 4D-VAR cost function Variable transformation from the model space to a control space v that models B0 (typically a transformation V with B0 = V Vᵀ).

  30. Minimization of 4D-VAR cost function Iterations of the minimizer • Calculate the cost function • Calculate the gradient • Update the estimate of the second derivative • Estimate the next model state vector • If the gradient is sufficiently small, stop There are freely available software packages to do this (see the L-BFGS example after slide 23)

  31. Steps in minimization of cost function 1. Calculate misfits dk in a single run of the non-linear model from time 0 to time n. 2. Start minimization by setting v = 0. In the first step the cost function is just a weighted sum of squares of misfits (a toy implementation of the whole loop is sketched after slide 33):

  32. Steps in minimization of cost function (incremental method) 3. In each following step calculate the cost function and the gradient: 3.1: Space transformation (model of B0). 3.2: Model integration from 0 to k (single run) and mapping to observational space. 3.3: Transform from observational to model space. 3.4: Integration from k to 0 by the adjoint (single run); at time step n all adjoint variables are initialized by 0, and at each time step they are forced by contributions from misfits. 3.5: Transform back to control space. 3.6:

  33. Steps in minimization of cost function 4. Linearize the model around the best estimate of the model trajectory and repeat steps 1-3.
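A toy implementation of these steps for a small linear model (all matrices, dimensions and observations are invented for illustration, and for clarity the minimization is done in model space rather than the transformed control space):

    import numpy as np
    from scipy.optimize import minimize

    # Toy strong-constraint 4D-VAR: linear model x_{i+1} = M x_i,
    # observations y_i = H x_i + noise at every step.
    rng = np.random.default_rng(0)
    n, p, nsteps = 4, 2, 6
    M = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # model operator
    H = rng.standard_normal((p, n))                     # observational operator
    Binv = np.eye(n)                                    # B0^{-1}
    Rinv = 10.0 * np.eye(p)                             # R^{-1}
    xb = rng.standard_normal(n)                         # background state
    ys = [H @ xb + rng.standard_normal(p) for _ in range(nsteps)]

    def cost_and_grad(x0):
        # Forward run: integrate the model, accumulate misfits d_i = y_i - H x_i.
        x, misfits = x0.copy(), []
        J = 0.5 * (x0 - xb) @ Binv @ (x0 - xb)
        for i in range(nsteps):
            d = ys[i] - H @ x
            misfits.append(d)
            J += 0.5 * d @ Rinv @ d
            x = M @ x
        # Adjoint run: start from zero at the final step, force with misfits.
        adj = np.zeros(n)
        for i in reversed(range(nsteps)):
            adj = M.T @ adj + H.T @ (Rinv @ misfits[i])
        return J, Binv @ (x0 - xb) - adj

    # Minimize with L-BFGS, using cost and the adjoint-computed gradient.
    res = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B")
    print(res.fun, res.x)

With a non-linear model, step 4 of the slides would relinearize M and H around the new best trajectory and repeat this inner minimization.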

  34. Analysis error covariance matrix The Hessian of the cost function is: We will show that the analysis error covariance matrix is the inverse of the Hessian of the cost function. The gradient will be written in the following way:

  35. Analysis error covariance matrix The terms in the gradient of the cost function from the previous equation can be rearranged in the following way: Multiplying the left and the right sides by their respective transposes gives:

  36. Analysis error covariance matrix Combining the assumed error statistics with the definitions above, it follows that the analysis error covariance matrix is the inverse of the Hessian of the cost function.
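In the linear Gaussian case this result reads, in the notation above,

\[ \mathbf{P}_a = \big(\nabla^2 J\big)^{-1} = \Big( B_0^{-1} + \sum_{i=0}^{k} \mathbf{M}_{0 \to i}^{\mathsf T} \mathbf{H}_i^{\mathsf T} R_i^{-1} \mathbf{H}_i \mathbf{M}_{0 \to i} \Big)^{-1}. \]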

  37. Advantages and disadvantages of 4D-VAR • Advantages: • In the case of the perfect linear model 4D-VAR gives the model state estimates which are equivalent to Kalman smoother estimates • This is achieved at a smaller computational cost than with the full Kalman smoother algorithm • Outstanding problems: • If the model is non-linear the computationally efficient incremental solution is accurate only on short time windows. • The inclusion of the model error complicates the algorithm significantly • The analysis error covariance matrix is available only at the beginning of the assimilation window (quite useless)

  38. 3D-VAR cost function 3D-VAR is an approximation of 4D-VAR in which it is assumed that the model that propagates background error covariances is the identity matrix. Therefore the assimilation window has to be adequately short.
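The resulting 3D-VAR cost function has the standard form

\[ J(x) = \tfrac{1}{2}\,(x - x_b)^{\mathsf T} B^{-1} (x - x_b) + \tfrac{1}{2}\, \big(y - H(x)\big)^{\mathsf T} R^{-1} \big(y - H(x)\big). \]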

  39. Steps in minimization of cost function

  40. Advantages and disadvantages of 3D-VAR (in comparison to 4D-VAR) • Advantages: • It is computationally much cheaper • The non-linearity is a less important problem, because the model trajectory is corrected in each step • Outstanding problems: • 3D-VAR is a filter • The background error covariance matrix is not optimally estimated

  41. Definition of Kalman Filter (Gaussian model)

  42. Relationship to variational methods • If the model error is uncorrelated in time, the Kalman filter and linear 4D-VAR with an imperfect model give the same model state estimate at the end of the time window. • 4D-VAR is a smoother: it estimates all model states by considering all observations. • The Kalman filter is very simple to apply when the model state is small. • The ensemble Kalman filter is an approximation of the Kalman filter which uses an ensemble (typically small) of forecasts and analyses in order to approximate the full dimension of the Kalman filter matrices.

  43. Extended Kalman filter equations – prediction step

  44. Extended Kalman filter equations – update step
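In standard notation the two steps read: prediction

\[ x_k^f = M_k\big(x_{k-1}^a\big), \qquad P_k^f = \mathbf{M}_k P_{k-1}^a \mathbf{M}_k^{\mathsf T} + Q_k, \]

and update

\[ K_k = P_k^f \mathbf{H}_k^{\mathsf T}\big(\mathbf{H}_k P_k^f \mathbf{H}_k^{\mathsf T} + R_k\big)^{-1}, \qquad x_k^a = x_k^f + K_k\big(y_k - H_k(x_k^f)\big), \qquad P_k^a = (I - K_k \mathbf{H}_k)\, P_k^f, \]

where \mathbf{M}_k and \mathbf{H}_k are the tangent linear model and observational operators.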

  45. Optimal interpolation (OI)
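In standard notation the OI analysis equation is

\[ x_a = x_b + B\, \mathbf{H}^{\mathsf T}\big(\mathbf{H} B \mathbf{H}^{\mathsf T} + R\big)^{-1}\big(y - H(x_b)\big), \]

which is the same solution that the 3D-VAR minimization converges to.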

  46. Relationship to variational methods • OI gives the same solution as 3D-VAR • An advantage is the simplicity of the equations • A disadvantage is the limited application to simple observational operators and error covariance models, because it requires the explicit inversion of matrices • Again, the ensemble method may simplify the calculations

  47. Suggested literature • Bouttier, F., and P. Courtier, 1999: Data assimilation concepts and methods. ECMWF Meteorological Training Course Lecture Series. (available from www.ecmwf.int) • Lewis, J., S. Lakshmivarahan, and S. Dhall, 2006: Dynamic Data Assimilation: A Least Squares Approach. Cambridge Univ. Press, 654 pp.
