1 / 48

Introduction to Data Assimilation



Presentation Transcript


  1. Introduction to Data Assimilation Peter Jan van Leeuwen IMAU

  2. Basic estimation theory Two unbiased estimates of the true temperature T: an observation T0 = T + e0 and a model value Tm = T + em, with E{e0} = 0, E{em} = 0, E{e0²} = s0², E{em²} = sm², E{e0 em} = 0. Assume a linear best estimate: Tn = a T0 + b Tm, with Tn = T + en. Find a and b such that E{en} = 0 and E{en²} is minimal: b = 1 - a and a = sm² / (s0² + sm²).

  3. Basic estimation theory Solution: Tn = [sm² / (s0² + sm²)] T0 + [s0² / (s0² + sm²)] Tm, and 1/sn² = 1/s0² + 1/sm². Just least squares!!! Note: sn is smaller than both s0 and sm! Best Linear Unbiased Estimate (BLUE).
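The two slides above are easy to sketch in code; a minimal illustration of the BLUE combination (the numeric values are hypothetical):

```python
def blue(T0, s0sq, Tm, smsq):
    """Combine two unbiased estimates of T by inverse-variance weighting."""
    a = smsq / (s0sq + smsq)                 # weight of the observation T0
    b = s0sq / (s0sq + smsq)                 # weight of the model value Tm (b = 1 - a)
    Tn = a * T0 + b * Tm                     # best linear unbiased estimate
    snsq = 1.0 / (1.0 / s0sq + 1.0 / smsq)   # 1/sn^2 = 1/s0^2 + 1/sm^2
    return Tn, snsq

# Equal error variances: the BLUE is the plain average, with halved variance.
Tn, snsq = blue(T0=10.0, s0sq=1.0, Tm=12.0, smsq=1.0)
```

Note that the resulting variance is smaller than both input variances, as the slide states.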

  4. Can we generalize this? • More dimensions • Nonlinear estimates (why linear?) • Observations that are not directly modeled • Biases

  5. The basics: probability density functions (figure: a pdf P(u) of velocity u, in m/s)

  6. The model pdf P[u(x1), u(x2), T(x3), …] (figure: joint pdf of the model variables u(x1), u(x2), T(x3))

  7. Observations • In situ observations: e.g. sparse hydrographic observations, irregular in space and time • Satellite observations: e.g. of the sea-surface

  8. Data assimilation: general formulation NO INVERSION !!!

  9. Bayes’ Theorem Conditional pdf: Similarly: Combine: Even better:
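As a concrete toy of combining a conditional pdf with a prior, here is a discrete two-state Bayes update (the prior and likelihood numbers are made up for illustration):

```python
# Bayes' theorem: p(state | d) = p(d | state) p(state) / p(d),
# where p(d) is the normalization ("evidence").
prior = {"warm": 0.5, "cold": 0.5}        # hypothetical prior p(state)
likelihood = {"warm": 0.8, "cold": 0.2}   # hypothetical likelihood p(d | state)

unnorm = {s: likelihood[s] * prior[s] for s in prior}
p_d = sum(unnorm.values())                # evidence p(d)
posterior = {s: unnorm[s] / p_d for s in prior}
```

No inversion is needed: the posterior is just the normalized product of likelihood and prior.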

  10. Filters and smoothers Filter: solve 3D problem several times Smoother: solve 4D problem once Note: the model is (highly) nonlinear!

  11. Pdf evolution in time Model equation: Pdf evolution: Kolmogorov’s equation (Fokker-Planck equation)
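A sketch of the equations this slide refers to (the slide images are not in the transcript; the symbols ψ, f, Q below are my own choices), assuming a stochastic model with drift f and noise covariance Q:

```latex
d\psi = f(\psi)\,dt + d\beta, \qquad E\{d\beta\, d\beta^{T}\} = Q\,dt
```

```latex
\frac{\partial p(\psi,t)}{\partial t}
= -\sum_i \frac{\partial}{\partial \psi_i}\bigl[f_i(\psi)\,p(\psi,t)\bigr]
+ \tfrac{1}{2}\sum_{i,j} \frac{\partial^2}{\partial \psi_i\,\partial \psi_j}\bigl[Q_{ij}\,p(\psi,t)\bigr]
```

This is the standard Fokker-Planck (Kolmogorov forward) form: the drift advects probability density, the noise diffuses it.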

  12. (Ensemble) Kalman Filter Only consider the mean and covariance. At observation times: assume p(d|state) and p(state) are Gaussian, and use Bayes. The mean of the product of two Gaussians is a linear combination of the two means: E{state|d} = a E{state} + b E{d|state}. But we have seen this before in the first example!

  13. (Ensemble) Kalman Filter II Old solution: Tn = [sm² / (s0² + sm²)] T0 + [s0² / (s0² + sm²)] Tm. But now for covariance matrices: m_new = R (P + R)⁻¹ m_old + P (P + R)⁻¹ d. Kalman filter notation: m_new = m_old + K (d - H m_old), with Kalman gain K = P Hᵀ (H P Hᵀ + R)⁻¹.
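A minimal matrix version of the analysis step on this slide (toy two-variable state, one observation; the values of P, R, H, and d are hypothetical):

```python
import numpy as np

# Kalman analysis step: m_new = m_old + K (d - H m_old),
# with Kalman gain K = P H^T (H P H^T + R)^{-1}.
P = np.array([[1.0, 0.5],
              [0.5, 2.0]])        # forecast error covariance
R = np.array([[0.5]])             # observation error covariance
H = np.array([[1.0, 0.0]])        # observe only the first state variable
m_old = np.array([0.0, 0.0])      # forecast mean
d = np.array([1.5])               # observation

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
m_new = m_old + K @ (d - H @ m_old)
P_new = (np.eye(2) - K @ H) @ P   # analysis error covariance
```

Note that the unobserved second variable is updated too, through the off-diagonal term of P; that is exactly the role of the error covariance described on the next slide.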

  14. The error covariance: tells us how model variables co-vary. For example, SSH at point x with SSH at point y: P_SSH,SSH(x,y) = E{ (SSH(x) - E{SSH(x)}) (SSH(y) - E{SSH(y)}) }. Or SSH at point x and SST at point y: P_SSH,SST(x,y) = E{ (SSH(x) - E{SSH(x)}) (SST(y) - E{SST(y)}) }.
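An ensemble estimate of such a cross-covariance, with synthetic SSH and SST anomalies standing in for real fields (the 0.6 coupling coefficient is made up):

```python
import numpy as np

# Sample version of P_SSH,SST(x,y) = E{(SSH(x)-E{SSH(x)})(SST(y)-E{SST(y)})},
# estimated over an ensemble of N realizations.
rng = np.random.default_rng(0)
N = 1000
ssh_x = rng.normal(size=N)                        # SSH anomalies at point x
sst_y = 0.6 * ssh_x + 0.2 * rng.normal(size=N)    # correlated SST at point y

cov = np.mean((ssh_x - ssh_x.mean()) * (sst_y - sst_y.mean()))
```

This is how an ensemble Kalman filter obtains covariances between different model variables in practice: from the spread of the ensemble itself.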

  15. Spatial correlation of SSH and SST in the Indian Ocean Haugen and Evensen, 2002

  16. Covariances between model variables Haugen and Evensen, 2002

  17. Summary on Kalman filters: • Gaussian pdfs for model and observations • Propagation of the error covariance P • If N operations for the state-vector evolution, then N² operations for the P evolution… Problems: • Nonlinear dynamics, so non-Gaussian statistics • Evolution equation for P not closed • Size of P (> 1,000,000,000,000) …

  18. Propagation of pdf in time:ensemble or particle methods

  19. Example of Ensemble Kalman Filter (EnKF) MICOM model with 1.3 million model variables Observations: Altimetry, infra-red Validated with hydrographic observations

  20. SST (-2K to +2K)

  21. SSH (-10 cm to +10 cm)

  22. RMS difference with XBT-data

  23. Spurious covariances?

  24. Localization in EnKF-like methods EnKF: update with the Kalman gain. Local updating: restrict the update to use only local covariances: a Schur product with a localization function, or a direct cut-off.
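A sketch of both localization variants on a 1-D grid, with a hypothetical small ensemble (a Gaussian taper stands in here for the Gaspari-Cohn function often used in practice):

```python
import numpy as np

n = 20
x = np.arange(n)
dist = np.abs(x[:, None] - x[None, :])         # pairwise grid distances

rng = np.random.default_rng(1)
ens = rng.normal(size=(10, n))                 # 10 members: noisy covariance estimate
P = np.cov(ens, rowvar=False)                  # raw ensemble covariance

L = 3.0                                        # localization length scale
rho = np.exp(-(dist / L) ** 2)                 # smooth localization function
P_schur = rho * P                              # Schur (elementwise) product
P_cutoff = np.where(dist <= L, 1.0, 0.0) * P   # direct cut-off alternative
```

The taper leaves the diagonal (local variances) untouched while damping the spurious long-range covariances that a small ensemble inevitably produces.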

  25. Ensemble Kalman Smoother (EnKS) Basic idea: use covariances over time. Efficient implementation: 1) run EnKF, store ensemble at observation times 2) add influence of data back in time using covariances at different times

  26. Nonlinear filters Probability density function of the layer thickness of the first layer at day 41 during data assimilation: no Kalman filter or variational method can represent such a pdf.

  27. The particle filter (Sequential Importance Resampling, SIR) Ensemble representation of the pdf with weights: p(ψ) ≈ Σ_i w_i δ(ψ - ψ_i).

  28. Particle filter
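A one-variable SIR sketch: weight the prior ensemble by a Gaussian likelihood of a hypothetical observation, then resample proportionally to the weights so that good particles are duplicated and poor ones dropped (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
particles = rng.normal(loc=0.0, scale=2.0, size=N)   # prior ensemble

d, r = 1.0, 0.5                                      # observation and its error std
w = np.exp(-0.5 * ((d - particles) / r) ** 2)        # Gaussian likelihood weights
w /= w.sum()                                         # normalize

idx = rng.choice(N, size=N, p=w)                     # resample with replacement
resampled = particles[idx]                           # equal-weight posterior ensemble
```

After resampling, the ensemble mean has shifted toward the observation and the spread has shrunk, with no Gaussian assumption on the prior required.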

  29. SIR-results for a quasi-geostrophic ocean model around South Africa with 512 members

  30. Smoothers: formulation Model error Initial error Observation error Boundary errors etc. etc.

  31. Smoothers: prior pdf

  32. Smoothers: posterior pdf Assume all errors are Gaussian: model initial observation

  33. Smoothers in practice: variational methods Assume Gaussian pdfs for model errors and observations. The cost function (penalty function) J then contains terms for the model dynamics, the initial condition, and the model-observation misfit. Find min J from the variational derivative.

  34. Gradient descent methods (figure: successive iterates 1, 1', 2, …, 6 descending the cost function J as a function of a model variable)
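A toy gradient descent on a quadratic, 3D-Var-style cost function J with a background term and an observation term (all matrices and values below are hypothetical):

```python
import numpy as np

# J(x) = 0.5 (x - xb)^T B^{-1} (x - xb) + 0.5 (d - H x)^T R^{-1} (d - H x)
B_inv = np.linalg.inv(np.array([[1.0, 0.0],
                                [0.0, 2.0]]))   # background error covariance B
R_inv = np.array([[2.0]])                       # observation error covariance R
H = np.array([[1.0, 1.0]])                      # observation operator
xb = np.array([0.0, 0.0])                       # background state
d = np.array([1.0])                             # observation

def grad_J(x):
    """Gradient of J: background pull plus observation misfit pull."""
    return B_inv @ (x - xb) - H.T @ R_inv @ (d - H @ x)

x = xb.copy()
for _ in range(500):
    x = x - 0.1 * grad_J(x)     # fixed-step descent toward min J
```

Real 4D-Var systems use the adjoint model to compute this gradient and a quasi-Newton method instead of a fixed step, but the minimization idea is the same.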

  35. The Euler-Lagrange equations Forward integrations Backward integrations Nonlinear two-point boundary value problem solved by linearization and iteration

  36. 4D-VAR strong constraint Assume model errors negligible: In practice only a few linear and one or two nonlinear iterations are done…. No error estimate (Hessian too expensive and unwanted…)

  37. Example 4D-VAR: GECCO • 1952 through 2001 on a 1º global grid with 23 layers in the vertical, using the ECCO/MIT adjoint technology. • Model started from Levitus and NCEP forcing and uses state-of-the-art physics modules (GM, KPP). • Control parameters: initial temperature and salinity fields and the time-varying surface forcing.

  38. The Mean Ocean Circulation, global Residual values can reveal inconsistencies in data sets (here the geoid).

  39. Bryden et al. (2005) MOC at 25N

  40. Error estimates (figure: cost function J versus x) Local curvature from the second derivative of J, the Hessian.

  41. Other smoothers Representers, PSAS, Ensemble Kalman smoother, … Simulated annealing (Metropolis-Hastings), …

  42. Relations between model variables • Covariance gives linear correlations between variables • Adjoint gives linear correlations between variables along a nonlinear model run (linear sensitivity) • Pdf gives the full nonlinear relation between variables (nonlinear sensitivity)

  43. Parameter estimation Bayes: looks simple, but we don't observe model parameters…. We observe model fields, so the observation operator H has to be found from model integrations.

  44. Example: ecosystem modeling 29 parameters, of which 15 were estimated and 14 were kept fixed.

  45. Estimated parameters from the particle filter (SIR) All other methods that were tried, including 4D-VAR and EnKF, failed. Losa et al, 2001

  46. Estimate size of model error Brasseur et al, 2006

  47. Why data assimilation? • Forecasts • Process studies • Model improvements - model parameters - parameterizations • ‘Intelligent monitoring’

  48. Conclusions • Evolution of the pdf with time is an essential ingredient • Filters: dominated by Kalman-like methods, but moving towards nonlinear methods (SIR etc.) • Smoothers: dominated by 4D-VAR. New ideas needed!
