Presentation Transcript
Jose-Luis Blanco, Javier González, Juan-Antonio Fernández-Madrigal
Dept. of System Engineering and Automation, University of Málaga (Spain)

An Optimal Filtering Algorithm for Non-Parametric Observation Models in Robot Localization

May 19-23, Pasadena, CA (USA)

Outline of the talk
1. Introduction
2. The proposed method
3. Experimental results
4. Conclusions


1. Introduction

The addressed problem: Bayesian filtering, i.e. computing the posterior belief p(x|y) from the prior belief p(x) and the observation likelihood p(y|x).

Two choices determine the tools suitable to solve this problem:
  • The representation of the prior/posterior densities: Gaussian vs. samples.
  • Assumptions about the form of the likelihood.
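The Bayes update behind this problem can be sketched numerically. A minimal sketch of one filtering step on a discretized 1-D state; the Gaussian shapes and all numbers here are purely illustrative, and note that p(y|x) only ever needs to be evaluated pointwise:

```python
import numpy as np

# Discretized 1-D state space: the belief is a vector of cell probabilities.
x = np.linspace(0.0, 10.0, 101)
prior = np.exp(-0.5 * ((x - 4.0) / 1.5) ** 2)   # p(x): prior belief
prior /= prior.sum()

# p(y|x): observation likelihood, evaluated pointwise (here a Gaussian,
# but any function we can evaluate at each cell would do).
y = 5.0
likelihood = np.exp(-0.5 * ((x - y) / 0.5) ** 2)

# p(x|y) ∝ p(y|x) p(x): posterior belief, renormalized over the grid.
posterior = likelihood * prior
posterior /= posterior.sum()

print(x[np.argmax(posterior)])  # posterior mode lies between prior mean and y
```

With a sharp likelihood (σ = 0.5) against a broad prior (σ = 1.5), the posterior mode lands much closer to the observation than to the prior mean.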
1. Introduction

In this work:
  • Representation of pdfs? Weighted, random samples (a particle filter).
  • Observation likelihood? Any arbitrary function (we only need to evaluate it pointwise).

1. Introduction

The role of the proposal distribution in particle filters. In the basic particle filtering algorithm, starting from the prior belief p(x), what happens to each particle?
  • New particles are drawn from the proposal distribution.
  • Weights are updated, depending on:
    - the observation likelihood p(y|x), and
    - the proposal distribution.
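These two steps can be sketched as one SIR iteration; the 1-D random-walk motion model, the Gaussian likelihood, and all the numbers below are hypothetical toy choices, not the talk's robot models:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, motion, likelihood):
    """One step of the basic particle filter with the standard proposal.

    Each particle is drawn from the transition model (the proposal here),
    then its weight is updated with the observation likelihood.
    """
    # 1) Draw new particles from the proposal distribution (transition model).
    particles = motion(particles)
    # 2) Update the weights with the observation likelihood.
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    # 3) Resample (multinomial) to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy 1-D example: random-walk motion, Gaussian observation near y = 2.0.
parts = rng.normal(0.0, 1.0, size=500)
w = np.full(500, 1.0 / 500)
motion = lambda p: p + rng.normal(0.5, 0.3, size=p.shape)   # transition model
lik = lambda p: np.exp(-0.5 * ((p - 2.0) / 0.4) ** 2)       # p(y|x)
parts, w = sir_step(parts, w, motion, lik)
print(parts.mean())   # particle cloud concentrates near the observation
```

After the resampling step every surviving particle carries the same weight, which is why weight concentration before resampling is the quantity to worry about.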

1. Introduction

How much does the choice of the proposal distribution q(·) matter? The goal is to approximate the posterior belief p(x|y) as well as possible. For a large mismatch between the proposal q(·) and the posterior, the particles represent the density very poorly.

1. Introduction

It is common to use the transition model as the proposal; we refer to this choice as the standard proposal. It is far from optimal: the proposal distribution q(·) is the key to the efficiency of a particle filter! [Doucet et al. 2000] introduced the optimal proposal.

1. Introduction

Relation of our method to other Bayesian filtering approaches: [comparison diagram in the original slide]


2. The proposed method

Our method:
  • A particle filter based on the optimal proposal [Doucet et al. 2000].
  • Can deal with non-parameterized observation models, using rejection sampling to approximate the actual densities.
  • Integrates KLD-sampling [Fox 2003] for a dynamic sample size (optional: it's not fundamental to the approach).
  • The weights of all the samples are always equal.
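One way to realize the rejection-sampling idea from the second bullet is to draw candidates from the transition model and accept them in proportion to the likelihood, which yields samples from the optimal proposal p(x_t | x_{t-1}, y_t) with equal weights. A sketch under assumed toy models (the bound `lik_max`, the 1-D motion and the likelihood are all illustrative, not the paper's robot setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_from_optimal_proposal(x_prev, motion_sample, likelihood, lik_max):
    """Draw one sample from p(x_t | x_{t-1}, y_t) by rejection sampling.

    Candidates come from the transition model; each is accepted with
    probability p(y_t | x) / lik_max, where lik_max upper-bounds the
    likelihood. The likelihood only needs pointwise evaluation.
    """
    while True:
        x = motion_sample(x_prev)                 # x ~ p(x_t | x_{t-1})
        if rng.uniform(0.0, lik_max) < likelihood(x):
            return x                              # accepted: an equal-weight sample

# Toy 1-D example: motion centered 1.0 ahead, observation peaked at 3.0.
motion_sample = lambda x: x + rng.normal(1.0, 0.5)
likelihood = lambda x: np.exp(-0.5 * ((x - 3.0) / 0.3) ** 2)  # any pointwise fn
samples = np.array([draw_from_optimal_proposal(2.0, motion_sample, likelihood, 1.0)
                    for _ in range(300)])
print(samples.mean())
```

Because every accepted sample already accounts for the observation, no per-sample reweighting is needed afterwards, matching the "all weights equal" property above.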
2. The proposed method

The theoretical model for each step of our method is this sequence of operations:
Duplication → SIR with optimal proposal → Fixed/Dynamic sample-size resampling

2. The proposed method

Illustrative example of how our method works. Particles [1]-[4] at time t-1 propagate to time t:
  • Each particle propagates in time probabilistically: this is the reason for the duplication. Each particle [i] gives rise to a group [i] of potential new samples.
  • The observation likelihood states which particles are really important: too distant particles do not contribute to the posterior!
  • We can predict which groups will be more important, before really generating the new samples!

2. The proposed method

The optimal proposal distribution, and the corresponding importance-weight update: the weight does not depend on the actual value of the particle.
2. The proposed method

Illustrative example (continued). Given the observation likelihood, the predicted contributions of the groups are, for instance: Group [1] → 55%, Group [2] → 0%, Group [3] → 45%, Group [4] → 0%.

2. The proposed method

Given the predictions, we draw particles according to the optimal proposal, only for those groups that really contribute to the posterior. A fixed or dynamic number of samples can be generated in this way.
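The allocation step can be sketched as follows; the group weights mirror the 55/0/45/0 % example above, and the multinomial draw is one plausible way (an assumption for illustration, not necessarily the paper's exact scheme) to split a fixed budget of samples among the contributing groups:

```python
import numpy as np

rng = np.random.default_rng(2)

# Predicted importances of the four groups (cf. the 55/0/45/0 % example):
# each group's weight is proportional to p(y_t | x_{t-1}^{[i]}), known
# before any new sample is actually generated.
group_weights = np.array([0.55, 0.0, 0.45, 0.0])

# Allocate a fixed budget of N samples among the groups; groups with zero
# predicted contribution receive no samples at all, so no work is wasted
# on particles that cannot contribute to the posterior.
N = 1000
counts = rng.multinomial(N, group_weights / group_weights.sum())
print(counts)
```

Only after this allocation are the actual new particles drawn (from the optimal proposal) inside each non-empty group, which is where the computational savings come from.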

2. The proposed method

Comparison to… basic Sequential Importance Resampling (SIR): each particle at time t-1 generates exactly 1 particle at time t, regardless of the observation likelihood. Prone to particle depletion!

2. The proposed method

Comparison to… the Auxiliary Particle Filter (APF) [Pitt & Shephard, 1999]: each particle generates a variable number of particles, but the propagation does not use the optimal proposal!

Outline of the talk
1. Introduction
2. The proposed method
3. Experimental results
  3.1. Numerical simulation
  3.2. Robot localization
4. Conclusions

3.1. Results

Numerical simulations: a Gaussian model for both the filtered density and the observation model. We compare the closed-form optimal solution (Kalman filter) to:
  • A PF using the "standard" proposal distribution.
  • The Auxiliary PF method [Pitt & Shephard, 1999].
  • This work (the "optimal" PF).
(Fixed sample size for these experiments.)

3.1. Results

Results from the numerical simulations, and comparison to the 1D Kalman filter:
[Figure: particle weights (y axis) vs. particles (x axis); the actual pdf from the Kalman filter vs. the approximated pdf (histogram) from the particles; Kullback-Leibler distance (KLD) for an increasing number of samples.]
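The KLD-vs-sample-count comparison can be reproduced in spirit with a small sketch: the histogram built from samples is compared against the reference Gaussian (standing in for the closed-form Kalman solution), and the distance shrinks as the sample count grows. The bin count and sample sizes are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def kld_hist_vs_gauss(samples, mu, sigma, bins=50):
    """Kullback-Leibler distance between a sample histogram (approximated
    pdf) and a reference Gaussian, both discretized on the same bins."""
    counts, edges = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    q = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    q = q / q.sum()
    mask = p > 0  # empty bins contribute zero to the KLD sum
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# More samples should drive the KLD towards zero.
small = kld_hist_vs_gauss(rng.normal(0.0, 1.0, 100), 0.0, 1.0)
large = kld_hist_vs_gauss(rng.normal(0.0, 1.0, 100_000), 0.0, 1.0)
print(small, large)   # the 100k-sample histogram is far closer to the Gaussian
```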


3.2. Results

Localization with real data. Path of the robot: ground truth from an RBPF with a large number of particles.
[Figure: robot path during localization, from Start to End; scale bar: 1 m.]

3.2. Results

Localization with real data: average errors in tracking (the particles are approximately at the right position from the beginning).
[Figure: average positioning error in meters (0.01-10, log scale) vs. number of particles (1-100), for the standard-proposal PF and our optimal PF.]

3.2. Results

Localization with real data: ratio of convergence success from global localization.
[Figure: ratio of convergence success (0.1-1) vs. initial sample size (particles/m², log scale), for our method and SIR with the "standard" proposal.]


Conclusions
  • A new particle filter algorithm has been introduced.
  • It can cope with non-parameterized observation likelihoods, and a dynamic number of particles.
  • Compared to standard SIR, it provides more robust global localization and pose tracking for similar computation times.
  • It is a generic algorithm: it can be applied to other problems in robotics, computer vision, etc.
Finally…

Source code (MRPT C++ libs), datasets, slides and instructions to reproduce the experiments are available online:
http://mrpt.sourceforge.net/ (papers, ICRA 08)

Jose-Luis Blanco, Javier González, Juan-Antonio Fernández-Madrigal
Dept. of System Engineering and Automation, University of Málaga (Spain)

Thanks for your attention!