
Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks

This paper introduces Rao-Blackwellised Particle Filtering (RBPF) for Dynamic Bayesian Networks, a numerical approximation scheme for estimating the posterior distribution of the hidden variables in a DBN. The algorithm combines importance sampling with analytical marginalization to reduce the dimension of the space that must be sampled, improving computational efficiency. The method is demonstrated on a joint robot localization and map-learning problem.


Presentation Transcript


  1. Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks Arnaud Doucet, Nando de Freitas, Kevin Murphy, Stuart Russell

  2. Introduction • Sampling in high-dimensional spaces is expensive • Many models have tractable substructure that can be analytically marginalized out, conditional on certain other nodes being imputed • The marginalization uses the Kalman filter, the HMM filter, or the junction tree algorithm for general DBNs • This reduces the size of the space over which we need to sample • The technique is called Rao-Blackwellisation: marginalize out some of the variables analytically and sample only the rest

  3. Problem Formulation • General state-space model/DBN with hidden variables z_t and observed variables y_t • {z_t} is a Markov process with initial distribution p(z_0) • Transition equation p(z_t | z_{t-1}) • Observations p(y_t | z_t) • Estimate the posterior p(z_{0:t} | y_{1:t}) • Recursion: p(z_{0:t} | y_{1:t}) ∝ p(y_t | z_t) p(z_t | z_{t-1}) p(z_{0:t-1} | y_{1:t-1}) • This recursion cannot be computed analytically in general, so a numerical approximation scheme is needed
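The state-space model on this slide can be illustrated with a minimal simulation. This is a hypothetical 1-D linear-Gaussian instance (the parameters a, q, r are illustrative, not from the paper): an initial distribution, a transition equation, and an observation equation.

```python
import random

random.seed(0)

def simulate(T=50, a=0.9, q=0.5, r=1.0):
    """Simulate z_t = a*z_{t-1} + N(0, q^2), y_t = z_t + N(0, r^2)."""
    z = random.gauss(0.0, 1.0)             # initial distribution p(z_0)
    zs, ys = [], []
    for _ in range(T):
        z = a * z + random.gauss(0.0, q)   # transition p(z_t | z_{t-1})
        y = z + random.gauss(0.0, r)       # observation p(y_t | z_t)
        zs.append(z)
        ys.append(y)
    return zs, ys

zs, ys = simulate()
print(len(zs), len(ys))
```

The filtering problem is to recover the hidden trajectory zs from the observed sequence ys alone.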

  4. Cont’d • Divide the hidden variables z_t into two groups, r_t and x_t • The conditional posterior distribution p(x_{0:t} | y_{1:t}, r_{0:t}) is analytically tractable • Focus on estimating p(r_{0:t} | y_{1:t}), which lives in a space of reduced dimension • Decomposition of the posterior from the chain rule: p(r_{0:t}, x_{0:t} | y_{1:t}) = p(x_{0:t} | y_{1:t}, r_{0:t}) p(r_{0:t} | y_{1:t}) • Only the marginal distribution p(r_{0:t} | y_{1:t}) needs to be approximated by sampling

  5. Importance Sampling and Rao-Blackwellisation • Sample N i.i.d. random samples (particles) r_{0:t}^(i) according to p(r_{0:t} | y_{1:t}) • Empirical estimate: the point-mass approximation (1/N) Σ_i δ(r_{0:t} − r_{0:t}^(i)) • The expected value of any function f of the hidden variables is then estimated by I_N(f) = (1/N) Σ_i f(r_{0:t}^(i))
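The empirical estimate above is plain Monte Carlo: average f over i.i.d. samples. A minimal sketch with an assumed toy target, N(0, 1), and f(z) = z², whose true expectation is 1:

```python
import random

random.seed(1)
N = 100_000
# N i.i.d. samples (particles) from the target distribution
samples = [random.gauss(0.0, 1.0) for _ in range(N)]
# Empirical estimate of E[f(z)] with f(z) = z**2 (true value is 1 for N(0,1))
est = sum(z * z for z in samples) / N
print(est)
```

The estimate approaches 1 as N grows, which is exactly the convergence behaviour the next slide formalizes.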

  6. Cont’d • Strong law of large numbers: I_N(f) converges almost surely towards I(f) = E[f] as N → ∞ • Central limit theorem: √N (I_N(f) − I(f)) converges in distribution to a zero-mean Gaussian whose variance determines the precision of the estimate

  7. Importance Sampling • It is often impossible to sample efficiently from the target distribution p • Instead, sample from an importance distribution q • q must be easy to sample from • p > 0 implies q > 0 (q must cover the support of p) • Correct for the mismatch between p and q with importance weights w ∝ p/q
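The slide's recipe can be sketched in a few lines. This is a toy setup of my own choosing, not from the paper: the target p is N(0, 1), the proposal q is the wider N(0, 2) (so p > 0 implies q > 0), and we estimate E_p[x²] = 1 with self-normalized importance weights.

```python
import math
import random

random.seed(2)

def normpdf(x, mu, sigma):
    """Gaussian density, used for both target p and proposal q."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

N = 50_000
xs = [random.gauss(0.0, 2.0) for _ in range(N)]           # sample from q, not p
w = [normpdf(x, 0, 1) / normpdf(x, 0, 2) for x in xs]     # weights w ∝ p/q
W = sum(w)
est = sum(wi * xi * xi for wi, xi in zip(w, xs)) / W      # weighted estimate of E_p[x^2]
print(est)
```

The weights re-balance the proposal's samples so that the weighted average targets expectations under p.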

  8. Cont’d • In the case where one can marginalize out x_{0:t} analytically, propose an alternative estimate of I(f) • The alternative (Rao-Blackwellised) importance sampling estimate replaces f(r_{0:t}, x_{0:t}) by its conditional expectation E[f | r_{0:t}, y_{1:t}] • To reach a given precision, it requires a reduced number N of samples compared with the plain estimate, since Var(E[f | r]) ≤ Var(f)
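The variance reduction can be checked empirically. A minimal sketch with an assumed two-variable toy model (not from the paper): r is a fair coin, x | r ~ N(mu_r, 1), and f(x) = x. The plain estimator samples both r and x; the Rao-Blackwellised one samples only r and uses the analytic conditional mean E[x | r] = mu_r.

```python
import random
import statistics

random.seed(3)
mu = {0: 0.0, 1: 3.0}   # assumed conditional means E[x | r]

def plain(n):
    """Sample both r and x, then average f(x) = x."""
    return sum(random.gauss(mu[random.randrange(2)], 1.0) for _ in range(n)) / n

def rao_blackwell(n):
    """Sample only r; substitute the analytic conditional mean for x."""
    return sum(mu[random.randrange(2)] for _ in range(n)) / n

trials = 500
v_plain = statistics.variance(plain(100) for _ in range(trials))
v_rb = statistics.variance(rao_blackwell(100) for _ in range(trials))
print(v_rb < v_plain)
```

By the law of total variance, Var(E[x | r]) ≤ Var(x), so the Rao-Blackwellised estimator's variance over repeated trials comes out strictly smaller here.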

  9. Rao-Blackwellised Particle Filters • Sequential importance sampling step • For i = 1, …, N, sample r_t^(i) ~ q(r_t | r_{0:t−1}^(i), y_{1:t}) and set r_{0:t}^(i) = (r_{0:t−1}^(i), r_t^(i)) • For i = 1, …, N, evaluate the importance weights up to a normalizing constant: w_t^(i) ∝ p(y_t | y_{1:t−1}, r_{0:t}^(i)) p(r_t^(i) | r_{t−1}^(i)) / q(r_t^(i) | r_{0:t−1}^(i), y_{1:t}) • For i = 1, …, N, normalize the importance weights • Selection step • Multiply/suppress samples with high/low importance weights to obtain N random samples approximately distributed according to p(r_{0:t} | y_{1:t}) • MCMC step • Apply a Markov transition kernel with invariant distribution p(r_{0:t} | y_{1:t}) to obtain new, better-spread particles
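The sample/weight/resample loop above can be sketched for one concrete toy model of my own choosing (not the paper's example): r_t is a Gaussian random walk sampled by the particle filter, and x is a static Gaussian bias in the observation y_t = r_t + x + noise that each particle marginalizes out exactly with a conjugate (Kalman-style) update, carrying p(x | r_{1:t}, y_{1:t}) = N(m, P).

```python
import math
import random

random.seed(4)

def normpdf(y, mu, var):
    """Gaussian density parameterized by variance."""
    return math.exp(-0.5 * (y - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def rbpf(ys, N=200, q=0.5, r_obs=1.0):
    """Toy RBPF: sample r_t per particle, marginalize the static bias x exactly."""
    parts = [{"r": 0.0, "m": 0.0, "P": 1.0} for _ in range(N)]
    for y in ys:
        w = []
        for p in parts:
            p["r"] += random.gauss(0.0, q)            # 1. sample r_t from the prior proposal
            S = p["P"] + r_obs ** 2                   #    predictive variance of y given r
            w.append(normpdf(y, p["r"] + p["m"], S))  # 2. weight = p(y_t | y_{1:t-1}, r_{0:t})
            K = p["P"] / S                            #    exact conjugate update of p(x | ...)
            p["m"] += K * (y - p["r"] - p["m"])
            p["P"] *= 1.0 - K
        tot = sum(w)
        w = [wi / tot for wi in w]                    # 3. normalize the importance weights
        # 4. selection step: multinomial resampling (multiply/suppress particles)
        parts = [dict(random.choices(parts, weights=w)[0]) for _ in range(N)]
    return sum(p["r"] for p in parts) / N             # posterior mean of r_t

est = rbpf([0.1, 0.3, -0.2, 0.5, 0.4])
print(est)
```

Because each particle's weight uses the predictive likelihood with x integrated out, only r is sampled; the paper's optional MCMC step is omitted in this sketch.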

  10. Robot Localization and Map Building • Problem of concurrent localization and map learning • Hidden variables: the robot's location and the colour of each grid cell • Observation: the colour sensed at the current location • Basic idea of the algorithm • Sample the robot's trajectory with the particle filter • Marginalize out the grid-cell colours, since the cells are conditionally independent of one another given the trajectory
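The conditional-independence idea can be illustrated for one particle. This is a hypothetical sketch (cell coordinates, sensor accuracy, and colour set are assumptions, not the paper's values): given a sampled trajectory, the map posterior factorizes, so each visited cell keeps its own small discrete colour belief updated by Bayes' rule.

```python
COLOURS = ["black", "white"]
P_CORRECT = 0.8                      # assumed probability the sensor reads the true colour

def update_cell(belief, observed):
    """Discrete Bayes update of one cell's colour belief from one observation."""
    post = {c: belief[c] * (P_CORRECT if c == observed else 1 - P_CORRECT)
            for c in COLOURS}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# One particle's map: independent beliefs over the cells its trajectory visited.
cell_map = {}
trajectory = [(0, 0), (0, 1), (0, 0)]            # trajectory sampled by the PF
observations = ["black", "white", "black"]       # colour sensed at each step
for cell, obs in zip(trajectory, observations):
    prior = cell_map.get(cell, {c: 0.5 for c in COLOURS})
    cell_map[cell] = update_cell(prior, obs)

print(round(cell_map[(0, 0)]["black"], 2))  # → 0.94
```

Each cell's belief is a tiny exact filter; only the trajectory itself needs particles, which is precisely the Rao-Blackwellisation exploited on this slide.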
