
Chapter 27: Linear Filtering - Part I: Kalman Filter


Presentation Transcript


  1. Chapter 27: Linear Filtering - Part I: Kalman Filter
  • Standard Kalman filtering – Linear dynamics

  2. Kalman Filter
  • Model dynamics – discrete time: x_{k+1} = M(x_k) + w_{k+1}, or M x_k + w_{k+1} in the linear case
  • x_k: true state, w_k: model error
  • x_k ∊ R^n, M: R^n → R^n; in the linear case M ∊ R^{n×n}
  • Observation: z_k = h(x_k) + v_k, or H x_k + v_k in the linear case
  • h: R^n → R^m, H ∊ R^{m×n}, v_k ∊ R^m
  • E(v_k) = 0, COV(v_k) = R_k

  3. Filtering, Smoothing, Prediction
  • Given F_N = { z_i | 1 ≤ i ≤ N } [Wiener, 1942] [Kolmogorov, 1942], estimating x_k is called:
  • k < N: smoothing
  • k = N: filtering
  • k > N: prediction
  • Go to page 464 – classification

  4. Statement of Problem – Linear case
  • x_0 ~ N(m_0, P_0)
  • x_{k+1} = M_k x_k + w_{k+1}, w_k ~ N(0, Q_k)
  • z_k = H_k x_k + v_k, v_k ~ N(0, R_k)
  • Given F_k = { z_j | 1 ≤ j ≤ k }, find the best estimate x̂_k of x_k that minimizes the mean squared error E[(x_k − x̂_k)^T (x_k − x̂_k)] = tr[ E(x_k − x̂_k)(x_k − x̂_k)^T ] = tr[ P̂_k ]
  • If x̂_k is also unbiased => it is the minimum-variance estimate!
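As a concrete illustration of this linear-Gaussian model, here is a minimal simulation sketch. All dimensions, matrices, and noise levels below are illustrative choices of mine, not values from the chapter:

```python
# Simulate x_{k+1} = M x_k + w_{k+1}, z_k = H x_k + v_k (illustrative values).
import numpy as np

rng = np.random.default_rng(0)

n, m = 2, 1                      # state and observation dimensions
M = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # dynamics M_k (held constant here)
H = np.array([[1.0, 0.0]])       # observation operator H_k
Q = 0.01 * np.eye(n)             # COV(w_k)
R = 0.25 * np.eye(m)             # COV(v_k)
m0, P0 = np.zeros(n), np.eye(n)  # x_0 ~ N(m0, P0)

x = rng.multivariate_normal(m0, P0)
xs, zs = [], []
for k in range(50):
    x = M @ x + rng.multivariate_normal(np.zeros(n), Q)  # model error w_{k+1}
    z = H @ x + rng.multivariate_normal(np.zeros(m), R)  # observation noise v_k
    xs.append(x); zs.append(z)
```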

  5. Model Forecast Step
  • At time k = 0, the initial information F_0 is given: x̂_0 = m_0, P̂_0 = P_0
  • Given x̂_0, the predictable part of x_1 is the forecast x_1^f = M_0 x̂_0
  • Error in prediction: e_1^f = x_1 − x_1^f = M_0 (x_0 − x̂_0) + w_1

  6. Forecast covariance
  • Covariance: P_1^f = E[ e_1^f (e_1^f)^T ] = M_0 P_0 M_0^T + Q_1
  • Predicted observation: E[z_1 | x_1 = x_1^f] = E[H_1 x_1 + v_1 | x_1 = x_1^f] = H_1 x_1^f
  • COV(z_1 | x_1^f) = E{ [z_1 − E(z_1 | x_1^f)][z_1 − E(z_1 | x_1^f)]^T } = E(v_1 v_1^T) = R_1

  7. Basic idea
  • At time k = 1: combine the forecast x_1^f (covariance P_1^f) with the observation z_1 (covariance R_1)
  • Fast forward from time k−1 to k: combine the forecast x_k^f (covariance P_k^f) with the observation z_k (covariance R_k)

  8. Forecast from k-1 to k
  • x_k^f = M_{k-1} x̂_{k-1}
  • ∴ P_k^f = E[ e_k^f (e_k^f)^T ] = M_{k-1} P̂_{k-1} M_{k-1}^T + Q_k
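A sketch of this forecast step as a function (the function and variable names are my own):

```python
# Forecast step from k-1 to k.
import numpy as np

def forecast(x_hat, P_hat, M, Q):
    """Propagate the analysis (x_hat, P_hat) one step through the model."""
    x_f = M @ x_hat            # x_k^f = M_{k-1} x_hat_{k-1}
    P_f = M @ P_hat @ M.T + Q  # P_k^f = M_{k-1} P_hat_{k-1} M_{k-1}^T + Q_k
    return x_f, P_f

# Example with the 2-state model from the earlier sketch:
x_f, P_f = forecast(np.zeros(2), np.eye(2),
                    np.array([[1.0, 0.1], [0.0, 1.0]]), 0.01 * np.eye(2))
```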

  9. Observations at time k
  • z_k = H_k x_k + v_k
  • Model predicted observation: ẑ_k = H_k x_k^f

  10. Data Assimilation Step
  • x̂_k = x_k^f + K_k [ z_k − H_k x_k^f ]
  • x_k^f: prior, K_k: Kalman gain, (z_k − H_k x_k^f): innovation

  11. Posterior estimate – also known as analysis
  • Substitute z_k = H_k x_k + v_k and simplify: x̂_k = (I − K_k H_k) x_k^f + K_k z_k

  12. Covariance of analysis
  • P̂_k = E[ (x_k − x̂_k)(x_k − x̂_k)^T ]
  • => P̂_k = (I − K_k H_k) P_k^f (I − K_k H_k)^T + K_k R_k K_k^T, which holds for any gain K_k

  13. Conditions on the Kalman gain – minimization of total variance
  • K_k = P_k^f H_k^T D_k^{-1} = P_k^f H_k^T [ H_k P_k^f H_k^T + R_k ]^{-1}
  • 1. This K_k minimizes the total variance tr(P̂_k): setting ∂ tr(P̂_k)/∂K_k = 0 gives the expression above
  • 2. With this optimal K_k the analysis covariance simplifies to P̂_k = (I − K_k H_k) P_k^f
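The data-assimilation (analysis) step with this gain, as a sketch (helper names are mine; the simplified covariance form assumes the optimal gain above):

```python
# Analysis step: combine forecast (x_f, P_f) with observation z.
import numpy as np

def analysis(x_f, P_f, z, H, R):
    """Kalman analysis step with the variance-minimizing gain."""
    D = H @ P_f @ H.T + R                     # innovation covariance D_k
    K = P_f @ H.T @ np.linalg.inv(D)          # Kalman gain K_k = P^f H^T D^{-1}
    x_hat = x_f + K @ (z - H @ x_f)           # analysis = prior + gain * innovation
    P_hat = (np.eye(len(x_f)) - K @ H) @ P_f  # P_hat = (I - K H) P^f (optimal K only)
    return x_hat, P_hat
```

Chaining `forecast` and `analysis` over k gives the full filter recursion.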

  14. Comments on the Kalman gain
  • 3. An interpretation of K_k: take n = m, H_k = I
  • Let P_k^f = Diag[ P_11^f, P_22^f, …, P_nn^f ] and R_k = Diag[ R_11, R_22, …, R_nn ]
  • K_k = P_k^f H_k^T [ H_k P_k^f H_k^T + R_k ]^{-1} = P_k^f [ P_k^f + R_k ]^{-1} = Diag[ P_11^f/(P_11^f + R_11), P_22^f/(P_22^f + R_22), …, P_nn^f/(P_nn^f + R_nn) ]
  • x̂_k = x_k^f + K_k [ z_k − H x_k^f ] = x_k^f + K_k [ z_k − x_k^f ] = (I − K_k) x_k^f + K_k z_k
  • ∴ If P_ii^f is large relative to R_ii, the observation component z_{i,k} gets a larger weight (see the numeric check below)
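A quick numeric check of this diagonal case (H = I; the variance values are mine):

```python
# Diagonal-case gain: each component weights observation vs forecast separately.
import numpy as np

P_f = np.diag([4.0, 0.01])          # forecast variances P_ii^f
R   = np.diag([1.0, 1.0])           # observation variances R_ii
K   = P_f @ np.linalg.inv(P_f + R)  # K_k = P^f (P^f + R)^{-1}
print(np.diag(K))                   # [0.8, 0.0099]: large P_ii^f -> weight near 1
```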

  15. Comments – special cases
  • 4. P̂_k (and K_k) are independent of the observations, so they can be computed offline
  • 5. No observations: x̂_k = x_k^f and P̂_k = P_k^f for all k ≥ 0
  • x_k^f = M_{k-1} x_{k-1}^f = M_{k-1} M_{k-2} M_{k-3} … M_1 M_0 x_0^f
  • P_k^f = M_{k-1} P_{k-1}^f M_{k-1}^T + Q_k = M(k-1:0) P_0 M^T(k-1:0) + ∑_{j=0}^{k-1} M(k-1:j+1) Q_{j+1} M^T(k-1:j+1), where M(i:j) = M_i M_{i-1} M_{i-2} … M_j and M(k-1:k) ≡ I
  • If Q_j ≡ 0: P_k^f = M(k-1:0) P_0 M^T(k-1:0)

  16. Special cases - continued
  • 6. No dynamics: M_k = I, w_k ≡ 0, Q_k ≡ 0
  • x_{k+1} = x_k = x
  • z_k = H_k x + v_k
  • x_k^f = x̂_{k-1}, with x̂_0 = E(x_0) = m_0
  • P_k^f = P̂_{k-1}, with P̂_0 = P_0
  • => the filter reduces to recursive estimation of a constant – same as (17.2.11)–(17.2.12), the static case (see the sketch below)
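A scalar sketch of this static case (all parameter values are mine): the analysis variance shrinks roughly like r/k as observations accumulate.

```python
# Static case (M = 1, Q = 0, H = 1): recursive estimation of a constant x.
import numpy as np

rng = np.random.default_rng(3)
x_true, r = 2.0, 0.5
x_hat, P_hat = 0.0, 10.0                 # prior mean E(x_0) and variance P_0
for k in range(200):
    z = x_true + rng.normal(0, np.sqrt(r))
    K = P_hat / (P_hat + r)              # scalar gain
    x_hat = x_hat + K * (z - x_hat)      # analysis mean
    P_hat = P_hat * r / (P_hat + r)      # analysis variance, ~ r/k for large k
print(x_hat, P_hat)                      # x_hat near 2.0, P_hat near 0.0025
```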

  17. Special cases
  • 7. When observations are perfect: R_k ≡ 0
  • => K_k = P_k^f H_k^T [ H_k P_k^f H_k^T + R_k ]^{-1} = P_k^f H_k^T [ H_k P_k^f H_k^T ]^{-1}
  • H_k: m × n, P_k^f: n × n, H_k^T: n × m => [ H_k P_k^f H_k^T ]^{-1}: m × m
  • Recall: P̂_k = (I − K_k H_k) P_k^f
  • From (27.2.19): H_k K_k = H_k P_k^f H_k^T [ H_k P_k^f H_k^T ]^{-1} = I

  18. Special cases
  • ∴ (I − K_k H_k) = (I − K_k H_k)^2, i.e. it is idempotent
  • Fact: idempotent matrices other than the identity are singular
  • => Rank(I − K_k H_k) ≤ n − 1
  • ∴ Rank(P̂_k) ≤ min{ Rank(I − K_k H_k), Rank(P_k^f) } ≤ n − 1
  • ∴ Rank of P̂_k ≤ n − 1: the analysis covariance is singular
  • ∴ When R_k is small, P̂_k is nearly singular, which causes computational instability (see the check below)
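A numeric illustration of this rank argument (example matrices are mine): as R_k → 0 the analysis covariance loses rank.

```python
# As r -> 0, P_hat = (I - K H) P^f becomes singular (here m = 1 < n = 2).
import numpy as np

P_f = np.eye(2)
H = np.array([[1.0, 0.0]])
for r in [1.0, 1e-4, 0.0]:
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + r * np.eye(1))
    P_hat = (np.eye(2) - K @ H) @ P_f
    # one eigenvalue tends to 0; at r = 0 the rank drops to n - 1 = 1
    print(r, np.linalg.matrix_rank(P_hat), np.linalg.eigvalsh(P_hat))
```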

  19. Special cases
  • 8. Residual checking
  • r_k = z_k − H_k x_k^f = innovation
  • x̂_k = x_k^f + K_k r_k
  • r_k = z_k − H_k x_k^f = H_k x_k + v_k − H_k x_k^f = H_k (x_k − x_k^f) + v_k = H_k e_k^f + v_k
  • ∴ COV(r_k) = H_k P_k^f H_k^T + R_k
  • ∴ By computing r_k and comparing its sample statistics with this covariance, we can check whether the filter is performing as expected (see the sketch below)
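A sketch of this consistency check (all numerical values are mine): simulate innovations from the stated model and compare their sample variance with H P^f H^T + R.

```python
# Residual (innovation) checking: sample variance vs predicted COV(r_k).
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.0]])
P_f = np.array([[2.0, 0.3], [0.3, 1.0]])
R = np.array([[0.5]])

S = H @ P_f @ H.T + R                       # theoretical COV(r_k)
# Simulate r_k = H e_k^f + v_k with e_k^f ~ N(0, P^f), v_k ~ N(0, R):
e = rng.multivariate_normal(np.zeros(2), P_f, size=10_000)
v = rng.normal(0.0, np.sqrt(R[0, 0]), size=10_000)
r = e @ H.T.ravel() + v
print(S[0, 0], r.var())                     # ~2.5 vs ~2.5 if the filter is consistent
```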

  20. 10. Computational Cost
  • Covariance propagation M_{k-1} P̂_{k-1} M_{k-1}^T costs O(n^3) operations per step; forming and inverting D_k = H_k P_k^f H_k^T + R_k costs O(m^2 n + m^3)
  • For n ≫ m, covariance propagation dominates

  21. Example 27.2.1 Scalar Dynamics with No Observation
  • x_k = a x_{k-1} + w_k, a > 0, w_k ~ N(0, q), x_0 ~ N(m_0, P_0)
  • E(x_k) = a^k E(x_0) = a^k m_0
  • P_k = Var(x_k) = Var(a x_{k-1} + w_k) = a^2 P_{k-1} + q
  • ∴ P_k = a^{2k} P_0 + q (a^{2k} − 1)/(a^2 − 1) for a ≠ 1

  22. Scalar dynamics
  • Note: for given m_0, P_0, q, the behavior of the moments depends on a (see the check below)
  • 1. 0 < a < 1: lim E(x_k) = 0, lim P_k = q/(1 − a^2)
  • 2. 1 < a < ∞: lim |E(x_k)| = ∞ (for m_0 ≠ 0), lim P_k = ∞
  • 3. a = 1: x_k = x_0 + ∑_{j=1}^{k} w_j, E(x_k) = m_0, P_k = P_0 + kq
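A quick check of the three regimes (the values of m_0, P_0, q are mine):

```python
# Iterate the moment recursions E(x_k) = a^k m0 and P_k = a^2 P_{k-1} + q.
m0, P0, q = 1.0, 1.0, 0.1
for a in [0.9, 1.0, 1.1]:
    m, P = m0, P0
    for _ in range(200):
        m, P = a * m, a * a * P + q
    # a < 1: m -> 0, P -> q/(1-a^2) = 0.526...; a = 1: P = P0 + k q; a > 1: both blow up
    print(a, m, P)
```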

  23. Example 27.2.2 Kalman Filtering
  • x_{k+1} = a x_k + w_{k+1}, w_{k+1} ~ N(0, q)
  • z_k = h x_k + v_k, v_k ~ N(0, r)
  • Forecast: x_{k+1}^f = a x̂_k, P_{k+1}^f = a^2 P̂_k + q
  • Analysis: x̂_k = x_k^f + K_k [ z_k − h x_k^f ]
  • K_k = P_k^f h [ h^2 P_k^f + r ]^{-1} = P̂_k h r^{-1}
  • P̂_k = P_k^f − (P_k^f)^2 h^2 [ h^2 P_k^f + r ]^{-1} = P_k^f r [ h^2 P_k^f + r ]^{-1} (see the sketch below)
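A runnable sketch of this scalar filter (the parameter values are illustrative choices of mine):

```python
# Scalar Kalman filter of Example 27.2.2.
import numpy as np

a, h, q, r = 0.95, 1.0, 0.1, 0.5
rng = np.random.default_rng(2)

x, x_hat, P_hat = 0.0, 0.0, 1.0                  # truth, analysis mean, analysis variance
for k in range(100):
    x = a * x + rng.normal(0, np.sqrt(q))        # truth evolves
    z = h * x + rng.normal(0, np.sqrt(r))        # observation
    x_f = a * x_hat                              # forecast mean
    P_f = a * a * P_hat + q                      # forecast variance
    K = P_f * h / (h * h * P_f + r)              # scalar Kalman gain
    x_hat = x_f + K * (z - h * x_f)              # analysis mean
    P_hat = P_f * r / (h * h * P_f + r)          # analysis variance
print(x, x_hat, P_hat)                           # estimate tracks the truth
```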

  24. Recurrences: Analysis of Stability
  • HW 1: verify the following recurrences
  • x_{k+1}^f = a (1 − K_k h) x_k^f + a K_k z_k
  • e_{k+1}^f = a (1 − K_k h) e_k^f − a K_k v_k + w_{k+1}
  • P_{k+1}^f = a^2 P̂_k + q
  • P̂_k = P_k^f r (h^2 P_k^f + r)^{-1}

  25. Example continued
  • P_{k+1}^f = a^2 P_k^f r / (h^2 P_k^f + r) + q
  • Divide by r: P_{k+1}^f / r = a^2 (P_k^f / r) / [ h^2 (P_k^f / r) + 1 ] + q/r
  • With p_k = P_k^f / r and the noise-variance ratio α = q/r: p_{k+1} = a^2 p_k / (h^2 p_k + 1) + α
  • This is a Riccati equation (first-order, scalar, nonlinear)

  26. Asymptotic Properties
  • Let h = 1: p_{k+1} = a^2 p_k / (p_k + 1) + α
  • Let δ_k = p_{k+1} − p_k = a^2 p_k / (p_k + 1) − p_k + α = [ −p_k^2 + p_k (a^2 − 1 + α) + α ] / (p_k + 1)
  • ∴ δ_k = g(p_k) / (p_k + 1), where g(p) = −p^2 + (a^2 + α − 1) p + α

  27. Example continued
  • When p_{k+1} = p_k => δ_k = 0 => equilibrium
  • δ_k = 0 iff g(p_k) = 0: −p^2 + (a^2 + α − 1) p + α = 0, i.e. −p^2 + β p + α = 0 with β = a^2 + α − 1
  • Roots: p* = [ β + √(β^2 + 4α) ] / 2 > 0 and p_* = [ β − √(β^2 + 4α) ] / 2 < 0
  • Evaluate the derivative of g(·) at the roots: g′(p) = −2p + β, so g′(p*) < 0 and g′(p_*) > 0

  28. Example continued
  • ∴ p* is an attractor – stable; p_* is a repellor – unstable
  • Thus p_k → p* as k → ∞ (see the check below)
  • => p* satisfies p* = a^2 p* / (p* + 1) + α
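A fixed-point check of this claim (the values of a and α are mine): from several starting points the Riccati iterates approach the same positive root p*.

```python
# Iterate p_{k+1} = a^2 p_k/(p_k + 1) + alpha (h = 1) and compare with p*.
import math

a, alpha = 1.2, 0.3
beta = a * a + alpha - 1.0
p_star = 0.5 * (beta + math.sqrt(beta * beta + 4.0 * alpha))  # positive root, ~1.031

for p0 in [0.01, 1.0, 10.0]:          # different starting points
    p = p0
    for _ in range(100):
        p = a * a * p / (p + 1.0) + alpha
    print(p0, p, p_star)              # all iterates land on p*
```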

  29. Rate of Convergence
  • Let y_k = p_k − p*, with p_{k+1} = a^2 p_k / (p_k + 1) + α
  • y_{k+1} = p_{k+1} − p* = [ a^2 p_k / (p_k + 1) + α ] − [ a^2 p* / (p* + 1) + α ] = a^2 p_k / (p_k + 1) − a^2 p* / (p* + 1) = a^2 (p_k − p*) / [ (1 + p_k)(1 + p*) ] = a^2 y_k / [ (1 + p_k)(1 + p*) ]
  • ∴ 1/y_{k+1} = (1 + p_k)(1 + p*) / (a^2 y_k) = (1 + y_k + p*)(1 + p*) / (a^2 y_k) = [ (1 + p*)/a ]^2 / y_k + (1 + p*)/a^2

  30. Rate of convergence - continued
  • Let z_k = 1/y_k => z_{k+1} = c z_k + b, where c = [ (1 + p*)/a ]^2 and b = (1 + p*)/a^2
  • Iterating: z_k = c^k z_0 + b (c^k − 1)/(c − 1)
  • ∴ when c > 1, i.e. c = [ (1 + p*)/a ]^2 > 1, z_k → ∞
  • When this is true, y_k → 0 at an exponential (geometric) rate (see the check below)
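A numeric check of the geometric rate (same illustrative a and α as in the previous sketch): the ratio y_{k+1}/y_k settles at 1/c = [a/(1 + p*)]^2.

```python
# Verify y_k -> 0 geometrically with ratio 1/c.
import math

a, alpha = 1.2, 0.3
beta = a * a + alpha - 1.0
p_star = 0.5 * (beta + math.sqrt(beta * beta + 4.0 * alpha))
c = ((1.0 + p_star) / a) ** 2            # c ~ 2.86 > 1 here

p = 5.0
for k in range(10):
    y = p - p_star
    p = a * a * p / (p + 1.0) + alpha    # one Riccati step
    print(k, y, (p - p_star) / y)        # ratio tends to 1/c ~ 0.349
```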

  31. Rate of convergence - continued
  • From (27.2.38), with h = 1: P̂_k = P_k^f r / (P_k^f + r), i.e. p̂_k = p_k / (p_k + 1) in scaled form
  • Since p_k → p*, the analysis covariance converges as well: P̂_k → r p* / (p* + 1)

  32. Stability of the Filter
  • Take h = 1: e_{k+1}^f = a (1 − K_k) e_k^f − a K_k v_k + w_{k+1}
  • Homogeneous part: e_{k+1}^f = a (1 − K_k) e_k^f
  • K_k = P_k^f / (P_k^f + r) = p_k / (p_k + 1), so 1 − K_k = 1/(p_k + 1)
  • ∴ the homogeneous solution is e_k^f = [ ∏_j a/(1 + p_j) ] e_0^f
  • ∴ since p_k → p* and a/(1 + p*) < 1, the homogeneous part decays to zero: the filter is stable (see the check below)
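A check of this stability argument (same illustrative a and α as above): iterating the homogeneous error recursion alongside the Riccati recursion drives the error to zero, since the asymptotic factor a/(1 + p*) is below one.

```python
# Homogeneous error e_{k+1} = [a/(1 + p_k)] e_k decays as p_k -> p*.
import math

a, alpha = 1.2, 0.3
beta = a * a + alpha - 1.0
p_star = 0.5 * (beta + math.sqrt(beta * beta + 4.0 * alpha))

p, e = 0.5, 1.0
for k in range(30):
    e = a / (1.0 + p) * e                # homogeneous part of the error recursion
    p = a * a * p / (p + 1.0) + alpha    # scaled Riccati recursion
print(e, a / (1.0 + p_star))             # e -> 0 since a/(1 + p*) ~ 0.59 < 1
```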
