An Introduction to Kalman Filtering
by Arthur Pece (aecp@diku.dk)

Presentation Transcript
Basic concepts in tracking/filtering
  • State variables x; observation y: both are vectors
  • Discrete time: x(t), y(t), x(t+1), y(t+1)
  • Probability P
  • pdf (probability density) p(v) of a vector variable v:

p(v*) = lim [dv -> 0] P(v* < v < v* + dv) / dv
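The limit definition can be checked numerically: for a known distribution, the probability of a small interval divided by its width approaches the density as the interval shrinks. A minimal sketch using the standard exponential distribution (an assumed example, not from the slides):

```python
import math

def cdf(v):
    # CDF of a standard exponential distribution: P(V < v) = 1 - e^(-v)
    return 1.0 - math.exp(-v)

def pdf(v):
    # Its density: p(v) = e^(-v) for v >= 0
    return math.exp(-v)

v_star = 1.0
for dv in (0.1, 0.01, 0.001):
    # P(v* < v < v* + dv) / dv should approach p(v*) as dv -> 0
    ratio = (cdf(v_star + dv) - cdf(v_star)) / dv
    print(dv, ratio, pdf(v_star))
```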

Basic concepts: Gaussian pdf

A Gaussian pdf is completely characterized by 2 parameters:

  • its mean vector
  • its covariance matrix
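In one dimension, the two parameters reduce to a mean and a variance, and the whole density is determined by them. A minimal scalar sketch (the function name is illustrative):

```python
import math

def gaussian_pdf(v, mean, var):
    """1-D Gaussian density, completely characterized by mean and variance."""
    return math.exp(-0.5 * (v - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

# The density peaks at the mean; at mean 0, variance 1 the peak is 1/sqrt(2*pi)
print(gaussian_pdf(0.0, 0.0, 1.0))   # ~0.3989
```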
Basic concepts: prior and likelihood
  • Prior pdf of variable v: in tracking, this is usually the probability conditional on the previous estimate: p[ v(t) | v(t-1) ]
  • Likelihood: pdf of the observation, given the state variables: p[ y(t) | x(t) ]
Basic concepts: Bayes’ theorem
  • Posterior pdf is proportional to prior pdf times likelihood:

p[ x(t) | x(t-1), y(t) ] = p[ x(t) | x(t-1) ] p[ y(t) | x(t) ] / Z

where Z = p[ y(t) ]
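Bayes’ theorem can be illustrated numerically with scalar Gaussians on a grid: multiply prior by likelihood pointwise, then divide by the normalizer Z. The numbers below (prior mean 0, variance 1; observation 1.0 with noise variance 0.5) are assumed for illustration:

```python
import math

def gauss(v, mean, var):
    return math.exp(-0.5 * (v - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

# Grid over plausible states, step 0.01
xs = [i * 0.01 for i in range(-500, 501)]
prior      = [gauss(x, 0.0, 1.0) for x in xs]   # p[ x(t) | x(t-1) ]
likelihood = [gauss(1.0, x, 0.5) for x in xs]   # p[ y(t) | x(t) ], with y = 1.0
unnorm     = [p * l for p, l in zip(prior, likelihood)]
Z = sum(unnorm) * 0.01                          # numerical normalizer
posterior  = [u / Z for u in unnorm]

# Posterior mean lies between prior mean (0) and observation (1)
post_mean = sum(x * p for x, p in zip(xs, posterior)) * 0.01
print(post_mean)
```

Analytically the posterior here is Gaussian with mean 2/3, which the grid computation reproduces.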

Basic concepts: recursive Bayesian estimation

Posterior pdf given the set y(1:t) of all observations up to time t:

p[ x(t) | y(1:t) ] =

p[ y(t) | x(t) ] . p[ x(t) | x(t-1) ] .

p[ x(t-1) | y(1:t-1) ] / Z1

Basic concepts: recursive Bayesian estimation

p[ x(t) | y(1:t) ] =

p[ y(t) | x(t) ] . p[ x(t) | x(t-1) ] .

p[ y(t-1) | x(t-1) ] . p[ x(t-1) | x(t-2) ] .

p[ x(t-2) | y(1:t-2) ] / Z2

Basic concepts: recursive Bayesian estimation

p[ x(t) | y(1:t) ] =

p[ y(t) | x(t) ] . p[ x(t) | x(t-1) ] .

p[ y(t-1) | x(t-1) ] . p[ x(t-1) | x(t-2) ] .

p[ y(t-2) | x(t-2) ] . p[ x(t-2) | x(t-3) ] .

… / Z*
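The recursion means we never need the full history: at each time step the previous posterior is propagated through the dynamics and multiplied by the new likelihood. A minimal sketch on a 2-state discrete chain (transition and likelihood tables are assumed illustrative numbers):

```python
# Recursive Bayesian filtering on a 2-state Markov chain:
# belief(t) is proportional to likelihood(y_t) * transition-propagated belief(t-1)

trans = [[0.9, 0.1],    # P[ x(t)=j | x(t-1)=i ]
         [0.2, 0.8]]
lik   = [[0.7, 0.3],    # p[ y | x ]: rows = state, cols = observation symbol
         [0.2, 0.8]]

belief = [0.5, 0.5]                      # p[ x(0) ]
for y in [0, 0, 1, 1, 1]:                # observation sequence
    # predict: propagate belief through the transition model
    pred = [sum(belief[i] * trans[i][j] for i in range(2)) for j in range(2)]
    # update: multiply by the likelihood of the new observation
    unnorm = [lik[j][y] * pred[j] for j in range(2)]
    Z = sum(unnorm)                      # normalizer, plays the role of Z* above
    belief = [u / Z for u in unnorm]
print(belief)
```

After three observations of symbol 1 (which favors state 1), the belief shifts toward state 1.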

Kalman model in words
  • Dynamical model: the current state x(t) is a linear (vector) function of the previous state x(t-1) plus additive Gaussian noise
  • Observation model: the observation y(t) is a linear (vector) function of the state x(t) plus additive Gaussian noise
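The two-line generative model can be simulated directly. A scalar sketch (coefficients d, f and noise variances q, r are assumed illustrative values; in the vector case they become the matrices D, F, N, R):

```python
import random

random.seed(0)

d, f = 0.95, 1.0          # dynamical and observation "matrices" (scalars here)
q, r = 0.1, 0.5           # process- and observation-noise variances

x, xs, ys = 0.0, [], []
for t in range(50):
    x = d * x + random.gauss(0.0, q ** 0.5)   # dynamical model
    y = f * x + random.gauss(0.0, r ** 0.5)   # observation model
    xs.append(x)
    ys.append(y)
print(ys[:5])
```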
Problems in visual tracking
  • Dynamics is nonlinear, non-Gaussian
  • Pose and shape are nonlinear, non-Gaussian functions of the system state
  • Most important: what is observed is not image coordinates, but pixel grey-level values: a nonlinear function of object shape and pose, with non-additive, non-Gaussian noise
Back to Kalman
  • A Gaussian pdf, propagated through a linear system, remains Gaussian
  • If Gaussian noise is added to a variable with Gaussian pdf, the resulting pdf is still Gaussian (sum of covariances)

---> The predicted state pdf is Gaussian if the previous state pdf was Gaussian

---> The observation pdf is Gaussian if the state pdf is Gaussian
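In the scalar case the two facts combine into one variance formula: a Gaussian with variance σ², scaled by d and with noise of variance q added, is Gaussian with variance d²σ² + q. A quick Monte Carlo check (d, q, σ² are assumed illustrative values):

```python
import random

random.seed(1)
d, q = 0.8, 0.3
sigma2 = 1.5                      # variance of the previous state

# Sample: scale a Gaussian variable by d, then add Gaussian noise
samples = [d * random.gauss(0.0, sigma2 ** 0.5) + random.gauss(0.0, q ** 0.5)
           for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(var, d * d * sigma2 + q)    # empirical vs. predicted variance
```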

Kalman posterior pdf
  • The product of 2 Gaussian densities is still Gaussian (sum of inverse covariances)

---> the posterior pdf of the state is Gaussian if prior pdf and likelihood are Gaussian
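"Sum of inverse covariances" is easiest to see in the scalar case: precisions (inverse variances) add, and the posterior mean is the precision-weighted average of the two means. A sketch with assumed illustrative numbers:

```python
# Fusing two scalar Gaussians: inverse variances add,
# and the mean is the precision-weighted average of the two means.
m1, v1 = 0.0, 1.0     # prior mean and variance
m2, v2 = 1.0, 0.5     # likelihood mean and variance (as a function of the state)

v_post = 1.0 / (1.0 / v1 + 1.0 / v2)    # 1/v_post = 1/v1 + 1/v2
m_post = v_post * (m1 / v1 + m2 / v2)
print(m_post, v_post)
```

The more precise component (here the likelihood, with the smaller variance) pulls the posterior mean harder: m_post = 2/3, v_post = 1/3.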

Kalman filter
  • Operates in two steps: prediction and update
  • Prediction: propagate mean and covariance of the state through the dynamical model
  • Update: combine prediction and innovation (defined below) to obtain the state estimate with maximum posterior pdf
Note on the symbols
  • From now on, the symbol x no longer represents the ”real” state (which we cannot know) but the mean of the posterior Gaussian pdf
  • The symbol A represents the covariance of the posterior Gaussian pdf
  • In the prediction step, x and A denote the mean and covariance of the prior (predicted) Gaussian pdf; the update step then overwrites them with the posterior values
Kalman prediction
  • Prior mean: previous mean vector times the dynamical matrix:

x(t) = D x(t-1)

  • Prior covariance matrix: previous covariance matrix pre- and post-multiplied by the dynamical matrix, plus the noise covariance:

A(t) = D A(t-1) Dᵀ + N
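In the scalar case the prediction step is just two multiplications and an addition. A sketch with assumed illustrative numbers (d plays the role of D, n of N):

```python
# Scalar Kalman prediction: x(t) = d x(t-1),  a(t) = d a(t-1) d + n
# (d, n and the previous estimate are assumed illustrative numbers)
d, n = 0.95, 0.1            # dynamics coefficient and process-noise variance
x_prev, a_prev = 2.0, 0.4   # previous posterior mean and variance

x_pred = d * x_prev                 # propagate the mean
a_pred = d * a_prev * d + n         # propagate the covariance, add noise
print(x_pred, a_pred)               # ~1.9, ~0.461
```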

Kalman update

In the update step, we must reason backwards, from effect (observation) to cause (state): we must ”invert” the generative process.

Hence the update is more complicated than the prediction.

Kalman update (continued)

Basic scheme:

  • Predict the observation from the current state estimate
  • Take the difference between predicted and actual observation (innovation)
  • Project the innovation back to update the state
Kalman innovation

Observation matrix F

The innovation v is given by:

v = y - F x

Observation-noise covariance R

The innovation has covariance W:

W = F A Fᵀ + R
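A scalar sketch of the innovation and its covariance (f, r and the predicted values below are assumed illustrative numbers):

```python
# Scalar innovation: v = y - f x, with covariance w = f a f + r
f, r = 1.0, 0.5                 # observation coefficient and noise variance
x_pred, a_pred = 1.9, 0.461     # predicted state mean and variance
y = 2.3                         # actual observation

v = y - f * x_pred              # innovation: actual minus predicted observation
w = f * a_pred * f + r          # innovation covariance (scalar form of F A Fᵀ + R)
print(v, w)
```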

Kalman update: state mean vector
  • Posterior mean vector: add weighted innovation to predicted mean vector
  • weigh the innovation by the relative covariances of state and innovation:

larger covariance of the innovation

--> larger uncertainty of the innovation

--> smaller weight of the innovation

Kalman gain
  • Predicted state covariance A
  • Innovation covariance W
  • Observation matrix F
  • Kalman gain K = A Fᵀ W⁻¹
  • Posterior state mean:

x = x + K v
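A scalar sketch of the gain and mean update (all numbers below are assumed illustrative values):

```python
# Scalar Kalman gain and mean update: k = a f / w,  x <- x + k v
x_pred, a_pred, f = 1.9, 0.461, 1.0   # predicted mean/variance, observation coeff.
v, w = 0.4, 0.961                     # innovation and its covariance

k = a_pred * f / w                    # scalar form of K = A Fᵀ W⁻¹
x_post = x_pred + k * v               # posterior mean: x + K v
print(k, x_post)
```

Note that a larger w (a less trustworthy innovation) directly shrinks k, which is the weighting described above.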

Kalman update: state covariance matrix
  • Posterior covariance matrix: subtract weighted covariance of the innovation
  • weigh the covariance of the innovation by the Kalman gain:

A = A - K W Kᵀ

  • Why subtract? Look carefully at the equation:

larger innovation covariance

--> smaller Kalman gain K

--> smaller amount subtracted!

Kalman update: state covariance matrix (continued)
  • Another equivalent formulation requires matrix inversion (sum of inverse covariances)

Advanced note:

  • The equations given here are for the usual covariance form of the Kalman filter
  • It is possible to work with inverse covariance matrices all the time (in prediction and update): this is called the information form of the Kalman filter
Summary of Kalman equations
  • Prediction:

x(t) = D x(t-1)

A(t) = D A(t-1) Dᵀ + N

  • Update:

innovation: v = y - F x

innov. cov.: W = F A Fᵀ + R

Kalman gain: K = A Fᵀ W⁻¹

posterior mean: x = x + K v

posterior cov.: A = A - K W Kᵀ
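The summary translates almost line-for-line into code in the scalar case. A self-contained sketch that simulates a slowly drifting state, then filters the noisy observations (all model parameters are assumed illustrative values):

```python
import random

random.seed(2)

d, f = 1.0, 1.0       # dynamics and observation coefficients (scalar model)
n, r = 0.01, 1.0      # process- and observation-noise variances

# Simulate a slowly drifting true state and noisy observations of it
truth, ys = [], []
x_true = 5.0
for t in range(100):
    x_true = d * x_true + random.gauss(0.0, n ** 0.5)
    truth.append(x_true)
    ys.append(f * x_true + random.gauss(0.0, r ** 0.5))

# Filter: prediction + update at every time step
x, a = 0.0, 100.0     # vague initial mean and variance
for y in ys:
    x, a = d * x, d * a * d + n          # prediction
    v = y - f * x                        # innovation
    w = f * a * f + r                    # innovation covariance
    k = a * f / w                        # Kalman gain
    x = x + k * v                        # posterior mean
    a = a - k * w * k                    # posterior covariance
print(x, truth[-1])
```

Despite starting far from the truth with huge uncertainty, the estimate locks on within a few steps, and the posterior variance a shrinks toward a small steady-state value.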

Kalman equations with control input u
  • Prediction:

x(t) = D x(t-1) + C u(t-1)

A(t) = D A(t-1) Dᵀ + N

  • Update:

innovation: v = y - F x

innov. cov.: W = F A Fᵀ + R

Kalman gain: K = A Fᵀ W⁻¹

posterior mean: x = x + K v

posterior cov.: A = A - K W Kᵀ
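The only change from the plain filter is in the mean prediction: a known control input shifts the predicted mean but, being deterministic, adds nothing to the covariance. A scalar sketch with assumed illustrative numbers:

```python
# Scalar prediction with a control input: x(t) = d x(t-1) + c u(t-1)
d, c, n = 1.0, 0.5, 0.1      # dynamics, control coefficient, process noise
x_prev, a_prev = 2.0, 0.4    # previous posterior mean and variance
u = 1.2                      # known control applied at t-1

x_pred = d * x_prev + c * u              # control shifts the predicted mean
a_pred = d * a_prev * d + n              # covariance is unaffected by a known u
print(x_pred, a_pred)                    # ~2.6, ~0.5
```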