
Graham Hesketh Information Engineering Group Rolls-Royce Strategic Research Centre



Presentation Transcript


  1. Kalman Filters for Fun Graham Hesketh Information Engineering Group Rolls-Royce Strategic Research Centre

  2. Basic Kalman Filter Equations Prediction: x(k|k-1) = A x(k-1|k-1), P(k|k-1) = A P(k-1|k-1) A' + Q Updating (correction): K = P(k|k-1) C' [C P(k|k-1) C' + R]^-1, e = z(k) - C x(k|k-1), x(k|k) = x(k|k-1) + K e, P(k|k) = (I - K C) P(k|k-1)
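A minimal sketch of the prediction and updating (correction) phases in the scalar case (q = r = 1), so every matrix collapses to a number; the values of A, C, Q, R and the measurement list below are illustrative only, not from the talk:

```python
# Scalar Kalman filter: A, C, Q, R, K are all numbers here.
# Illustrative values; a real filter would use the matrix forms.

def predict(x, P, A=1.0, Q=0.01):
    """Prediction: propagate mean and covariance through the model."""
    x_pred = A * x               # x(k|k-1) = A x(k-1|k-1)
    P_pred = A * P * A + Q       # P(k|k-1) = A P A' + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, C=1.0, R=0.1):
    """Updating (correction): blend the prediction with a measurement z."""
    e = z - C * x_pred           # innovation
    S = C * P_pred * C + R       # innovation covariance
    K = P_pred * C / S           # Kalman gain
    x = x_pred + K * e           # corrected mean
    P = (1.0 - K * C) * P_pred   # corrected covariance
    return x, P

x, P = 0.0, 1.0                   # vague initial guess
for z in [0.9, 1.1, 1.0, 0.95]:   # noisy measurements of a constant ~1.0
    x, P = update(*predict(x, P), z)
```

Note how P shrinks with every measurement: the filter's own estimate of its accuracy improves as evidence accumulates.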

  3. Definitions System state: mean estimate x - vector of size q e.g. for a single channel, constant acceleration model x1 = position x2 = velocity x3 = acceleration

  4. Definitions System state: covariance (uncertainty) P - matrix of size q x q e.g. P11 = uncertainty (variance) in position P22 = uncertainty in velocity P12 = covariance in position/velocity . . .

  5. Definitions State dynamics: state transition matrix A - matrix of size q x q e.g. for the constant acceleration example (time step Δt): A = [1 Δt Δt²/2; 0 1 Δt; 0 0 1]

  6. Definitions State dynamics: process noise (uncertainty) Q - matrix of size q x q e.g. (modelling acceleration as constant with a long duration; 3rd derivative is random and instantaneous)
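As a sketch, the A and Q matrices for this single-channel constant-acceleration model might be built as follows. The time step dt and jerk intensity q are illustrative, and the Q shown is one common discretisation of continuous white-noise jerk (the random, instantaneous 3rd derivative mentioned above), not necessarily the exact form used in the talk:

```python
# State is [position, velocity, acceleration], so q = 3.
dt, q = 0.1, 1.0   # illustrative time step and jerk intensity

# State transition: kinematics under constant acceleration over one step.
A = [[1.0, dt,  0.5 * dt**2],
     [0.0, 1.0, dt         ],
     [0.0, 0.0, 1.0        ]]

# Process noise: continuous white-noise jerk, integrated over the step.
Q = [[q * dt**5 / 20, q * dt**4 / 8, q * dt**3 / 6],
     [q * dt**4 / 8,  q * dt**3 / 3, q * dt**2 / 2],
     [q * dt**3 / 6,  q * dt**2 / 2, q * dt       ]]
```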

  7. Definitions Measurement space: observation matrix C - matrix of size r x q e.g. for the simple single measurement constant acceleration model: C = [1 0 0]

  8. Definitions Measurement space: innovation sequence • e - error between observation and prediction • a.k.a. the innovation sequence • should be zero-mean, random white noise • if not, there is a problem with either the model or a sensor

  9. Definitions Measurement space: measurement noise matrix R - matrix of size r x r e.g. for the single measurement case, R = σ² R is the covariance matrix of the measurement noise. It is usually diagonal (i.e. uncorrelated sensor noise).

  10. Definitions Correction phase: Kalman gain matrix K - matrix of size q x r K is the optimal correction factor to obtain the Minimum Mean Squared Error estimate of the system state mean and covariance

  11. Definitions Confidence: Relative contributions to uncertainty Predicted measurement - uncertainty due to model Actual measurement - uncertainty due to sensor noise Relative weighting for the innovation error

  12. Definitions Correction phase: system stiffness Relative weighting (Gain) for the innovation error Model uncertainty >> Measurement noise → Believe Sensor (e.g. Gain is high) Low system stiffness, low lag Model uncertainty << Measurement noise → Believe Model (e.g. Gain is low) High system stiffness, high lag

  13. Definitions Correction phase: updating state estimates “Best” estimates of the system state mean and covariance, at time k, conditioned on all measurements up to and including time k

  14. Comparison with Fixed-Coefficient Filters If the A, C, Q and R matrices do not vary with k then the single channel CA Kalman filter settles down to have steady-state Kalman gains which are equivalent to a standard α-β-γ filter, but with optimum coefficients selected automatically. This approach is known as Wiener filtering. It is computationally much simpler than a Kalman filter, but the steady-state assumptions may not be valid in many practical cases.
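The settling of the gains can be seen numerically. A scalar sketch with fixed A, C, Q, R (illustrative values): iterating the covariance recursion drives the Kalman gain to a constant, which is exactly what makes a fixed-coefficient steady-state filter possible:

```python
# Fixed scalar model: the gain K converges to a steady-state value.
A, C, Q, R = 1.0, 1.0, 0.01, 0.1   # illustrative, time-invariant
P = 1.0                            # initial covariance
gains = []
for _ in range(50):
    P = A * P * A + Q              # predict covariance
    K = P * C / (C * P * C + R)    # Kalman gain
    P = (1.0 - K * C) * P          # update covariance
    gains.append(K)
# After a few iterations the gain barely changes between steps,
# so it could be frozen and the Riccati recursion dropped entirely.
```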

  15. Advantages of Kalman Filters • Copes with variable A, C, Q and R matrices • Copes with the large uncertainty of the initialisation phase • Copes with missing data • Provides a convenient measure of estimation accuracy (via the covariance matrix P) • Fuses information from multiple-sensors

  16. Disadvantages of Kalman Filters • Computationally complex (especially for large numbers of sensor channels r) • Requires conditional independence of the measurement errors, sample-to-sample • Requires linear models for state dynamics and observation processes (A and C) • Getting a suitable value for Q (a.k.a. tuning) can be something of a black art

  17. Dealing with the disadvantages [1] • Computationally complex (especially for large numbers of sensor channels r) • Computers are getting faster all the time • New algorithms for fast matrix inversion

  18. Dealing with the disadvantages [2] • Requires conditional independence of the measurement errors, sample-to-sample • The independence criterion can be removed by using Covariance Intersection updating

  19. Covariance Intersection vs Kalman updates Standard Kalman filter update: Covariance Intersection update: Kalman filter covariance (alternative form):
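A scalar sketch of the Covariance Intersection update for two estimates (a, Pa) and (b, Pb) whose cross-correlation is unknown. The mixing weight omega is fixed here for simplicity; in practice it is usually chosen to minimise the trace or determinant of the fused covariance:

```python
# Scalar Covariance Intersection: fuse two estimates without
# knowing their cross-correlation. All values illustrative.

def covariance_intersection(a, Pa, b, Pb, omega=0.5):
    """Fuse (a, Pa) and (b, Pb) with mixing weight omega in [0, 1]."""
    P_inv = omega / Pa + (1.0 - omega) / Pb          # fused information
    P = 1.0 / P_inv
    x = P * (omega * a / Pa + (1.0 - omega) * b / Pb)  # fused mean
    return x, P

x, P = covariance_intersection(1.0, 1.0, 3.0, 1.0)
```

Unlike the standard Kalman update, the fused covariance here never claims more confidence than the inputs justify, which is what makes CI safe when the estimates may share information.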

  20. Dealing with the disadvantages [3] • Requires linear models for state dynamics and observation processes (A and C) • This can be overcome by using the EKF (Extended Kalman Filter), but only at the expense of your sanity and all your free time • A superior alternative to EKF is the Unscented Filter

  21. Unscented Filter - General Problem n-dimensional vector random variable x with mean x̄ and covariance Pxx We want to predict the distribution of an m-dimensional vector random variable y, where y is related to x by the non-linear transformation y = f(x) • In filtering there are two such transformations: • the state transition • the observation

  22. Unscented Filter - Solution Compute the set σ of 2n points from the rows or columns of the matrices ±√(n Pxx). This set is zero mean with covariance Pxx. Compute a set of points with the same covariance, but with mean x̄, as Xi = σi + x̄. Transform each point as Yi = f(Xi). Compute ȳ and Pyy by computing the mean and covariance of the 2n points in the set Y. This is computationally much simpler and significantly more accurate (lower MSE) than the corresponding EKF
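A one-dimensional sketch of this transform (with kappa = 0 the centre point gets zero weight, leaving the 2n sigma points described above); the function y = x² merely stands in for the state-transition or observation non-linearity, and all values are illustrative:

```python
import math

def unscented_transform(mean, var, f, kappa=0.0):
    """Propagate a 1-D mean/variance through a non-linear f via sigma points."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]   # centre + 2n points
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    y = [f(s) for s in sigma]                      # transform each point
    y_mean = sum(w * yi for w, yi in zip(weights, y))
    y_var = sum(w * (yi - y_mean) ** 2 for w, yi in zip(weights, y))
    return y_mean, y_var

y_mean, y_var = unscented_transform(1.0, 0.5, lambda x: x * x)
```

For this quadratic case the transformed mean comes out as mean² + var, i.e. the exact E[x²], which a first-order EKF linearisation would miss entirely.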

  23. Dealing with the disadvantages [4] • Getting a suitable value for Q (a.k.a. tuning) can be something of a black art • There are many heuristics for dealing with process noise (mainly to keep the filter stable); one method is to switch between alternative models • We have Matlab toolboxes for learning Q (and all the other matrices) from empirical data, using a technique called Expectation Maximisation (EM)

  24. Summary • The Kalman Filter is the way to fuse multiple data sources, provided that their errors are conditionally independent • Covariance Intersection is the conservative way to deal with inter-dependence in distributed systems • The Unscented Filter is universally superior to the EKF for all cases involving non-linear transformations • Many heuristics exist to extend and improve the basic Kalman filter for tracking and sensor bias detection

  25. End of Talk I'm sure you've had enough Unless there are any questions – make ‘em brief
