
State Estimation and Kalman Filtering



  1. State Estimation and Kalman Filtering

  2. Sensors and Uncertainty • Real world sensors are noisy and suffer from missing data (e.g., occlusions, GPS blackouts) • Use sensor models to estimate ground truth, unobserved variables, make forecasts

  3. How does this relate to motion? • A robot must: • Know where it is (localization) • Model its environment to avoid collisions (mapping, model building) • Move in order to get a better view of an object (view planning, inspection) • Predict how other agents move (tracking)

  4. Intelligent Robot Architecture • [Diagram] Percept (raw sensor data) → Sensing algorithms (state estimation, mapping, tracking, calibration, object recognition, …) → Models → Planning → Action

  5. State Estimation Framework • Sensor model (aka observation model) • State => observation • Observations may be partial, noisy • Dynamics model (aka transition model) • State => next state • Transition may be noisy, control-dependent • Probabilistic state estimation • Given observations, estimate a probability distribution over the state • [Diagram: dynamics model xt → xt+1; sensor model xt → zt]

  6. Markov Chain • [Diagram: X0 → X1 → X2 → X3] • Given an initial distribution and the dynamics model, we can predict how the state evolves (as a probability distribution) over time

  7. Hidden Markov Model • [Diagram: hidden state variables X0 → X1 → X2 → X3, with observed variables z1, z2, z3] • Use observations to get a better idea of where the robot is at time t • Predict – observe – predict – observe…

  8. General Bayesian Filtering • Compute P(xt | zt), given zt and a prior on xt-1 • Bayes' rule: P(xt | zt) = α P(zt | xt) P(xt) • Prediction: P(xt) = ∫ P(xt | xt-1) P(xt-1) dxt-1 • Normalization: 1/α = ∫ P(zt | xt) P(xt) dxt • Nasty, hairy integrals
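On a discretized 1-D state space, the integrals above become sums, and the whole filter is a few lines of code. A minimal sketch (the grid size, transition matrix, and sensor likelihood below are made-up illustrations, not from the slides):

```python
import numpy as np

# Discrete (histogram) Bayes filter on a 1-D grid of cells.
def predict(belief, transition):
    # P(x_t) = sum_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1})
    return transition @ belief

def update(belief, likelihood):
    # P(x_t | z_t) = alpha * P(z_t | x_t) * P(x_t)
    posterior = likelihood * belief
    return posterior / posterior.sum()  # 1/alpha = sum of the numerator

n = 5
belief = np.full(n, 1.0 / n)                        # uniform prior
transition = np.roll(np.eye(n), 1, axis=0)          # deterministic "move right"
likelihood = np.array([0.1, 0.1, 0.7, 0.05, 0.05])  # sensor favors cell 2

belief = update(predict(belief, transition), likelihood)
```

The normalization step is exactly the 1/α term: dividing by the sum of the unnormalized posterior.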

  9. Key Representational Issue • How to represent and perform calculations on probability distributions?

  10. Kalman Filtering • In a nutshell • Efficient filtering in continuous state spaces • Gaussian transition and observation models • Ubiquitous for tracking with noisy sensors, e.g. radar, GPS, cameras

  11. Kalman Filtering • Closed-form solution for the HMM • Assuming a linear Gaussian process • Transition and observation functions are a linear transformation plus multivariate Gaussian noise: y = A x + e,  e ~ N(m, S)

  12. Linear Gaussian Transition Model for Moving 1D Point • Consider position and velocity xt, vt • Time step h • Without noise: xt+1 = xt + h vt,  vt+1 = vt • With Gaussian noise of std s1: P(xt+1 | xt) ∝ exp(−(xt+1 − (xt + h vt))² / (2 s1²)), i.e. xt+1 ~ N(xt + h vt, s1²)
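Sampling from this transition model is one line per step. A quick sketch (time step, noise level, and initial state are illustrative choices):

```python
import numpy as np

# Simulate the 1-D moving point: x_{t+1} ~ N(x_t + h*v_t, s1^2), v constant.
rng = np.random.default_rng(0)
h, s1 = 0.1, 0.05   # time step and transition-noise std (illustrative)
x, v = 0.0, 1.0     # initial position and velocity

trajectory = [x]
for _ in range(10):
    x = x + h * v + s1 * rng.standard_normal()  # deterministic drift + noise
    trajectory.append(x)
```

After 10 steps the point has drifted by roughly h·v·10 = 1.0, plus accumulated noise.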

  13. Linear Gaussian Transition Model • If the prior on position is Gaussian, then the predicted distribution is also Gaussian: N(m, s²) → N(m + h v, s² + s1²) • [Figure: prior shifted by h v and widened by the transition noise s1]

  14. Linear Gaussian Observation Model • Position observation zt • Gaussian noise of std s2: zt ~ N(xt, s2²)

  15. Linear Gaussian Observation Model • If the prior on position is Gaussian, then the posterior is also Gaussian • Posterior mean: (s² z + s2² m) / (s² + s2²) • Posterior variance: s'² = s² s2² / (s² + s2²) • [Figure: position prior, observation probability, posterior probability]
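The scalar fusion formulas above are worth checking on a concrete case. A minimal sketch (the prior and observation values are made up):

```python
import numpy as np

# 1-D measurement update: fuse a Gaussian prior N(m, s^2) with an
# observation z that has noise std s2 (formulas from slide 15).
def fuse_1d(m, s, z, s2):
    mean = (s**2 * z + s2**2 * m) / (s**2 + s2**2)  # posterior mean
    var = s**2 * s2**2 / (s**2 + s2**2)             # posterior variance
    return mean, var

# Equally confident prior and observation: posterior mean lands halfway
# between them, and the variance is halved.
mean, var = fuse_1d(m=0.0, s=1.0, z=2.0, s2=1.0)
```

Note the posterior variance is always smaller than both s² and s2²: an observation never makes the estimate less certain in this model.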

  16. Kalman Filter • xt ~ N(mx, Sx) • xt+1 = F xt + g + v • zt+1 = H xt+1 + w • v ~ N(0, Sv) (dynamics noise), w ~ N(0, Sw) (observation noise)

  17. Two Steps • Maintain mt, St, the parameters of the Gaussian distribution over the state at time t • Predict • Compute the distribution of xt+1 using the dynamics model alone • Update • Compute the distribution of xt+1 | zt+1 with Bayes' rule

  18. Multivariate Computations • Linear transformations of Gaussians • If x ~ N(m, S) and y = A x + b • Then y ~ N(A m + b, A S Aᵀ) • Consequence • If x ~ N(mx, Sx), y ~ N(my, Sy), z = x + y (x, y independent) • Then z ~ N(mx + my, Sx + Sy) • Conditional of a Gaussian • If [x1, x2] ~ N([m1; m2], [S11, S12; S21, S22]) • Then on observing x2 = z, we have x1 ~ N(m1 + S12 S22⁻¹ (z − m2), S11 − S12 S22⁻¹ S21)
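The conditioning rule is the workhorse of the update step, so here is a concrete 2-D check. A sketch (the joint mean, covariance, and observed value are made-up numbers):

```python
import numpy as np

# Condition a 2-D Gaussian [x1, x2] ~ N(mu, Sigma) on observing x2 = z:
# x1 | x2=z ~ N(m1 + S12 S22^-1 (z - m2), S11 - S12 S22^-1 S21).
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
z = 1.0

S11, S12, S22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
cond_mean = mu[0] + S12 / S22 * (z - mu[1])   # pulled toward the observation
cond_var = S11 - S12 * S12 / S22              # always <= S11
```

Because x1 and x2 are positively correlated here, observing x2 = 1 pulls the mean of x1 up from 0 to 1 and cuts its variance from 2 to 1.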

  19. Predict Step • Linear transformation rule: • xt+1 ~ N(F mt + g, F St Fᵀ + Sv) • Call these m' and S'

  20. Update step • Given m’ and S’, and new observation z • Compute the final distribution mt+1 and St+1 using the conditional distribution formulas
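Putting slides 16-20 together, one predict + update cycle is short in numpy. A minimal sketch, written directly from the formulas above (the 1-D example matrices at the bottom are illustrative):

```python
import numpy as np

# One Kalman filter cycle: predict with (F, g, Sv), then update on z
# using the conditional-Gaussian formulas with C = H S' H^T + Sw.
def kalman_step(m, S, F, g, Sv, H, Sw, z):
    # Predict: x_{t+1} ~ N(F m + g, F S F^T + Sv)
    m_pred = F @ m + g
    S_pred = F @ S @ F.T + Sv
    # Update: condition the joint Gaussian on the observation z
    C = H @ S_pred @ H.T + Sw
    K = S_pred @ H.T @ np.linalg.inv(C)    # Kalman gain
    m_new = m_pred + K @ (z - H @ m_pred)
    S_new = S_pred - K @ H @ S_pred
    return m_new, S_new

# 1-D example: static state, direct noisy observation, unit variances.
m = np.array([0.0]); S = np.array([[1.0]])
F = np.eye(1); g = np.zeros(1); Sv = 0.0 * np.eye(1)
H = np.eye(1); Sw = np.eye(1)
m, S = kalman_step(m, S, F, g, Sv, H, Sw, z=np.array([2.0]))
```

With equal prior and observation variances, this reproduces the slide-15 scalar result: the mean moves halfway to the observation and the variance halves.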

  21. Presentation

  22. Deriving the Update Rule • Write the joint as [xt; zt] ~ N([m'; a], [S', B; Bᵀ, C])  (1) — unknowns a, B, C • xt ~ N(m', S')  (2) Assumption • zt | xt ~ N(H xt, SW)  (3) Assumption • zt | xt ~ N(a + Bᵀ S'⁻¹ (xt − m'), C − Bᵀ S'⁻¹ B)  (4) Conditioning (1) • Setting the means of (3) and (4) equal: H xt = a + Bᵀ S'⁻¹ (xt − m')  =>  a = H m', Bᵀ = H S'  (5) • Setting the covariances equal: C − Bᵀ S'⁻¹ B = SW  =>  C = H S' Hᵀ + SW  (6) • xt | zt ~ N(m' + B C⁻¹ (zt − a), S' − B C⁻¹ Bᵀ)  (7) Conditioning (1) • Substituting (5) and (6) gives the Kalman filter update: mt = m' + S' Hᵀ C⁻¹ (zt − H m')  (8) • St = S' − S' Hᵀ C⁻¹ H S'  (9)

  23. Filtering Variants • Extended Kalman Filter (EKF) • Unscented Kalman Filter (UKF) • Particle Filtering

  24. Extended/Unscented Kalman Filter • Dynamics / sensing models are nonlinear: xt+1 = f(xt) + v,  zt+1 = h(xt+1) + w • Gaussian noise and Gaussian state estimate, as before • Strategy: • Propagate means as usual • What about covariances & conditioning?

  25. Extended Kalman Filter • Approach: linearize about the current estimate: F = ∂f/∂x (xt),  H = ∂h/∂x (xt+1) • Use these Jacobians to propagate covariance matrices and perform conditioning • (Jacobian = matrix of partial derivatives)
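The EKF reuses the Kalman equations with F and H replaced by Jacobians evaluated at the current estimate. A sketch assuming finite-difference Jacobians; the dynamics f, observation h, and noise levels in the example are made-up illustrations:

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    # Numerical matrix of partial derivatives, one column per state dim.
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(m, S, f, h, Sv, Sw, z):
    F = jacobian(f, m)           # F = df/dx evaluated at m_t
    m_pred = f(m)                # means propagate through the nonlinearity
    S_pred = F @ S @ F.T + Sv    # covariance uses the linearization
    H = jacobian(h, m_pred)      # H = dh/dx evaluated at the prediction
    C = H @ S_pred @ H.T + Sw
    K = S_pred @ H.T @ np.linalg.inv(C)
    return m_pred + K @ (z - h(m_pred)), S_pred - K @ H @ S_pred

# Illustrative nonlinear system: drift through sin, observe sin of state.
f = lambda x: x + 0.1 * np.sin(x)
h = lambda x: np.sin(x)
m, S = ekf_step(np.array([0.5]), np.eye(1), f, h,
                0.01 * np.eye(1), 0.1 * np.eye(1), z=np.array([0.6]))
```

Analytic Jacobians are preferred when available; the finite-difference version here just keeps the sketch self-contained.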

  26. Unscented Kalman Filter • Approach: propagate a set of sample points (sigma points) with the same mean/covariance as the input through the nonlinearity, then reconstruct a Gaussian from the transformed points

  27. Next Class • Particle Filter • Non-Gaussian distributions • Nonlinear observation/transition function • Readings • Rekleitis (2004) • P.R.M. Chapter 9

  28. Midterm Project Report Schedule (tentative) • Tuesday • Changsi and Yang: Indoor person following • Roland: Person tracking • Jiaan and Yubin: Indoor mapping • You-wei: Autonomous driving • Thursday • Adrija: Dynamic collision checking with point clouds • Damien: Netlogo • Santhosh and Yohanand: Robot chess • Jingru and Yajia: Ball collector with robot arm • Ye: UAV simulation
