
## Keywords

Robot Localization – Finding where the robot is.

Posterior/belief – The estimated probability distribution over the robot's state, given the data collected so far.

Markov Process – The next state depends only on the current state.

Bayesian Filter – Recursive state estimation based on previously collected data.

Particle Filter – A sample-based Bayesian filter.

Adaptive Particle Filter – A particle filter that adapts its number of samples automatically.

Robot Localization Based on the Adaptive Particle Filter

## Introduction

Most previous particle filter implementations use a fixed number of samples at each time iteration.

An adaptive sample size is proposed to increase efficiency.

## Bayes Filter

Recursive state estimation based on previously collected data.

At time t:

Bel(x_t) = p(x_t | z_t, u_{t-1}, z_{t-1}, u_{t-2}, ..., u_0, z_0)

x – state

u – movement (control)

z – observation
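
The recursion behind this posterior alternates a motion (prediction) step and a measurement (correction) step. A minimal sketch on a toy discrete state space follows; the 5-cell corridor map, the motion success probability, and the door sensor model are all invented for illustration, not taken from the paper.

```python
# Minimal discrete Bayes filter sketch on a hypothetical 1-D cyclic corridor.

def predict(bel, u, p_move=0.8):
    """Motion update: shift belief by u cells; with prob 1 - p_move the move fails."""
    n = len(bel)
    out = [0.0] * n
    for i, b in enumerate(bel):
        out[(i + u) % n] += p_move * b      # intended move succeeded
        out[i] += (1 - p_move) * b          # robot stayed put
    return out

def correct(bel, z, world, p_hit=0.9, p_miss=0.1):
    """Measurement update: reweight by p(z | x), then normalize."""
    out = [b * (p_hit if world[i] == z else p_miss) for i, b in enumerate(bel)]
    s = sum(out)
    return [b / s for b in out]

world = ['door', 'door', 'wall', 'wall', 'wall']   # hypothetical map
bel = [1 / 5] * 5                                   # uniform prior Bel(x0)
bel = correct(bel, 'door', world)                   # z0: robot sees a door
bel = predict(bel, 1)                               # u0: move one cell right
bel = correct(bel, 'door', world)                   # z1: robot sees a door again
# the belief now peaks at cell 1: the only door reachable by moving right from a door
```

Each `correct` call is the Bayes update that conditions Bel(x_t) on the newest observation z_t, and each `predict` call integrates over the motion u.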

## Particle Filter

A sample-based Bayesian filter.

The belief is represented by a set of samples, each carrying a weight: the probability that the sample is the correct state (robot location).

At time t:

S_t = { <x_t(i), w_t(i)> | i = 1, ..., n }

x – state

w – importance weight

n – number of samples
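
One iteration over the sample set S_t can be sketched as prediction, importance weighting, and resampling. The 1-D state, the motion noise, and the Gaussian-shaped sensor weight below are illustrative assumptions, not the paper's models:

```python
import math
import random

def pf_step(particles, u, z, motion_noise=0.1, sensor_sigma=0.5):
    """One particle filter step on a hypothetical 1-D robot (z = noisy position)."""
    n = len(particles)
    # 1. Prediction: propagate each sample through the noisy motion model.
    xs = [x + u + random.gauss(0, motion_noise) for x, _ in particles]
    # 2. Correction: importance weight w_i proportional to p(z | x_i).
    ws = [math.exp(-((z - x) ** 2) / (2 * sensor_sigma ** 2)) for x in xs]
    total = sum(ws)
    ws = [w / total for w in ws]
    # 3. Resampling: draw n new samples with probability proportional to weight.
    xs = random.choices(xs, weights=ws, k=n)
    return [(x, 1.0 / n) for x in xs]

random.seed(0)
n = 500
particles = [(random.uniform(0, 10), 1.0 / n) for _ in range(n)]  # uniform prior
true_x = 2.0
for _ in range(5):
    true_x += 1.0                                   # robot actually moves 1 unit
    particles = pf_step(particles, u=1.0, z=true_x)
mean_x = sum(x for x, _ in particles) / n
# mean_x converges near the true position (7.0)
```

The fixed `n` here is exactly what the adaptive variant discussed next replaces with a sample size computed on the fly.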

## Adaptive Particle Filter

When do we need it? – When the environment is ambiguous: identical-looking paths, corners.

After moving 5 m – the belief is still ambiguous.

Keep moving – the robot is still confused.

After moving 55 m – the robot finally localizes.

## Adaptive Particle Filter (cont.)

Kullback-Leibler distance (KLD) is used to measure the difference between the sample-based approximation and the true posterior.

The KL distance quantifies the distribution difference (the approximation error).

Calculate the sample size (n_x) needed to approximate the distribution with error < ε.
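
The sample-size bound in Fox's KLD-sampling paper depends on k, the number of histogram bins currently occupied by particles, the KL error bound ε, and the confidence level 1 − δ. A sketch of that bound (the ε and δ defaults here are illustrative):

```python
import math
from statistics import NormalDist

def kld_sample_size(k, epsilon=0.05, delta=0.01):
    """Samples needed so that, with prob 1 - delta, the KL divergence between
    the sample-based estimate and the true posterior stays below epsilon."""
    if k <= 1:
        return 1
    z = NormalDist().inv_cdf(1 - delta)     # upper quantile of the standard normal
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2 * epsilon) * (1 - a + math.sqrt(a) * z) ** 3)

# More occupied bins (a more spread-out belief) demand more samples:
sizes = [kld_sample_size(k) for k in (2, 10, 100, 1000)]
```

This is how the filter becomes adaptive: when the belief collapses onto few bins, n_x shrinks; when uncertainty spreads the particles over many bins, n_x grows.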

## Experimental Results

Better match to the true posterior with fewer samples.

Smaller localization error with fewer samples.

## Example of Adaptive Particle Filter

Localization at four different times.

(a), (c), (d) – less ambiguous, so the robot used fewer samples.

(b) – sensor loss, so the robot used more samples because of the uncertainty.

## Particle Filter Implementations

Video from Dieter Fox (Adaptive PF on R-T)

Video from Sebastian Thrun (Fixed-sample MCL)

## Questions

1. Can you clarify the use of 'k' bins? It appears that each bin has a dimension that represents a sector of the known world. If a point is found in the world, it is classified into the corresponding bin (from the paper: "fixed bin size [delta] of 50cm × 50cm × 10deg"). Basically, is this assumption correct?

– Yes. The map is divided into bins (small volumes of the state space when it is high-dimensional).
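
The binning described above can be sketched directly, assuming the paper's fixed bin size of 50cm × 50cm × 10deg: every particle pose (x, y, θ) maps to a discrete bin, and k is the number of distinct bins the particle set occupies. The example particles are invented.

```python
import math

BIN_XY = 0.5                       # 50 cm, in meters
BIN_THETA = math.radians(10.0)     # 10 degrees, in radians

def pose_to_bin(x, y, theta):
    """Map a continuous pose to its discrete (x, y, theta) bin index."""
    theta = theta % (2 * math.pi)  # normalize heading to [0, 2*pi)
    return (int(x // BIN_XY), int(y // BIN_XY), int(theta // BIN_THETA))

# Three hypothetical particles: the first two fall into the same bin.
particles = [(0.1, 0.2, 0.0), (0.2, 0.3, 0.05), (3.0, 4.0, 1.0)]
k = len({pose_to_bin(*p) for p in particles})   # number of occupied bins
```

This k is what feeds the KLD sample-size bound: a set is maintained during sampling, and each particle that opens a previously empty bin increments k.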

2. I couldn't find, probably overlooked, how the map was actually generated. It almost looks like they let the robot wander around the halls using its sonar sensors to map out the world. How was the "known world" generated?

– The map was pre-loaded into the robot. If the world is unknown the problem becomes “mapping”.

## Questions (cont.)

3. Were the calculations performed at run time as the robot traversed the hallway, or was the robot just logging data for post-operation evaluation? This question comes from when the robot loses sensory information: they say they did this by deleting data across a random time interval. This means they either remoted into the robot and deleted information from memory, or removed the data before processing once it was brought to another computer. Can you clarify?

– I would say someone deleted part of the observation data off-line and then re-ran it on a simulator.

## Questions (cont.)

4. On page three the author makes the statement “At the beginning, the belief Bel(x0) is initialized with the prior knowledge about the state of the system” what prior knowledge do they use to initialize the belief?

– Each sample has equal weight at the very beginning (a uniform distribution).

5. In figure 3 on page 9, they say it takes 55 meters before the robot figures out where it is located. In real-world applications, isn't that too long to calculate the current position?

– Yes, it took too long. That's why this paper was written, and why robot soccer fields aren't built like that.

## Questions (cont.)

6. On page 17 they discuss real-time performance, from that section it appears as though this algorithm would not work very well in a real-time setting, how could they improve its performance in real-time settings?

– This section discusses robot tracking with a known initial state.

– No big performance improvement because the state uncertainty does not vary much during position tracking.

7. How is the distance in KLD-sampling measured by the Kullback-Leibler distance?

8. What is the difference between KLD-based adaptation and likelihood-based adaptation?

## Questions (cont.)

9. How ‘hardware dependent’ do you think the parameters are? i.e. if you changed to using cameras as input devices instead of lasers and sonar how much would you have to change the parameters?

– Vision processing needs much more computation than sonar/laser processing…

## Reference

Q1 – Q3 : Mark Holak

Q4 – Q6 : Greg McChesney

Q7 – Q8 : Mahdi Moghadasi

Q9 : Matthew Wilhelm
