
Probabilistic Techniques for Mobile Robot Navigation

Probabilistic Techniques for Mobile Robot Navigation. Wolfram Burgard University of Freiburg Department of Computer Science Autonomous Intelligent Systems http://www.informatik.uni-freiburg.de/~burgard/.




Presentation Transcript


  1. Probabilistic Techniquesfor Mobile Robot Navigation Wolfram Burgard University of Freiburg Department of Computer Science Autonomous Intelligent Systems http://www.informatik.uni-freiburg.de/~burgard/ Special Thanks to Frank Dellaert, Dieter Fox, Giorgio Grisetti, Dirk Haehnel, Cyrill Stachniss, Sebastian Thrun, …

  2. Probabilistic Robotics

  3. Robotics Today

  4. “Humanoids” Overcoming the uncanny valley [Courtesy of Hiroshi Ishiguro]

  5. Humanoid Robots [Courtesy of Sven Behnke]

  6. DARPA Grand Challenge [Courtesy of Sebastian Thrun]

  7. DCG 2007

  8. Robot Projects: Interactive Tour-guides (Rhino, Albert, Minerva)

  9. Minerva

  10. Robot Projects: Acting in the Three-dimensional World (Herbert, Zora, Groundhog)

  11. Nature of the Data: range data and odometry data

  12. Probabilistic Techniques in Robotics • Perception = state estimation • Action = utility maximization • Key question: how to scale to higher-dimensional spaces?

  13. Axioms of Probability Theory Pr(A) denotes the probability that proposition A is true.
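The axioms referred to on this slide are the standard ones and can be written as:

```latex
0 \le \Pr(A) \le 1
\qquad
\Pr(\mathrm{True}) = 1, \quad \Pr(\mathrm{False}) = 0
\qquad
\Pr(A \vee B) = \Pr(A) + \Pr(B) - \Pr(A \wedge B)
```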

  14. Bayes Formula
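The formula on this slide, reconstructed in standard notation:

```latex
P(x \mid y) \;=\; \frac{P(y \mid x)\, P(x)}{P(y)}
\;=\; \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}}
```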

  15. Normalization P(x|z) = η P(z|x) P(x), with η = P(z)⁻¹ = 1 / Σx P(z|x) P(x). Algorithm: compute aux(x) = P(z|x) P(x) for all x, then set P(x|z) = aux(x) / Σx aux(x).

  16. Simple Example of State Estimation • Suppose a robot obtains measurement z • What is P(open|z)?

  17. Causal vs. Diagnostic Reasoning • P(open|z) is diagnostic. • P(z|open) is causal and can often be obtained by counting frequencies. • Often causal knowledge is easier to obtain. • Bayes rule allows us to use causal knowledge: P(open|z) = P(z|open) P(open) / P(z).

  18. Example • P(z|open) = 0.6, P(z|¬open) = 0.3 • P(open) = P(¬open) = 0.5 • Applying Bayes rule yields P(open|z) = 2/3, so z raises the probability that the door is open.
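A minimal sketch of this update in code (the function name and representation are illustrative, not the lecture's):

```python
def bayes_update(prior_open, p_z_open, p_z_not_open):
    """P(open | z) via Bayes rule with normalization."""
    num = p_z_open * prior_open                    # P(z|open) P(open)
    den = num + p_z_not_open * (1.0 - prior_open)  # P(z)
    return num / den

# Values from the slide: P(z|open)=0.6, P(z|not open)=0.3, P(open)=0.5
print(bayes_update(0.5, 0.6, 0.3))  # 2/3: z raises the belief in "open"
```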

  19. Combining Evidence • Suppose our robot obtains another observation z2. • How can we integrate this new information? • More generally, how can we estimate P(x | z1, ..., zn)?

  20. Recursive Bayesian Updating Markov assumption: zn is independent of z1, ..., zn-1 if we know x.
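Under this Markov assumption, the recursive update takes the standard form:

```latex
P(x \mid z_1, \ldots, z_n)
\;=\; \eta_n \, P(z_n \mid x)\, P(x \mid z_1, \ldots, z_{n-1})
\;=\; \eta_{1\ldots n} \left[ \prod_{i=1}^{n} P(z_i \mid x) \right] P(x)
```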

  21. Example: Second Measurement • P(z2|open) = 0.5, P(z2|¬open) = 0.6 • P(open|z1) = 2/3 • Updating yields P(open|z1,z2) = 5/8, so z2 lowers the probability that the door is open.
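The recursion can be checked numerically; a sketch using the slide's values (helper name is illustrative):

```python
def bayes_update(prior_open, p_z_open, p_z_not_open):
    """One recursive Bayes update: new P(open) given one observation."""
    num = p_z_open * prior_open
    return num / (num + p_z_not_open * (1.0 - prior_open))

belief = bayes_update(0.5, 0.6, 0.3)     # after z1: 2/3
belief = bayes_update(belief, 0.5, 0.6)  # after z2: 5/8 = 0.625
print(belief)
```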

  22. Actions • Often the world is dynamic: • actions carried out by the robot, • actions carried out by other agents, • or simply the passage of time change the world. • How can we incorporate such actions?

  23. Typical Actions • The robot turns its wheels to move • The robot uses its manipulator to grasp an object • Plants grow over time… • Actions are never carried out with absolute certainty. • In contrast to measurements, actions generally increase the uncertainty.

  24. Modeling Actions • To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf P(x|u,x’). • This term specifies the probability that executing u changes the state from x’ to x.

  25. Example: Closing the door

  26. State Transitions P(x|u,x’) for u = “close door”: If the door is open, the action “close door” succeeds in 90% of all cases.

  27. Integrating the Outcome of Actions Continuous case: Bel’(x) = ∫ P(x|u,x’) Bel(x’) dx’. Discrete case: Bel’(x) = Σx’ P(x|u,x’) Bel(x’).

  28. Example: The Resulting Belief
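The resulting belief can be computed with the discrete prediction equation. The sketch below assumes the post-measurement belief Bel(open) = 5/8 and the standard door-example transition table (close succeeds with probability 0.9 on an open door; a closed door stays closed):

```python
# Transition model P(x | u='close door', x'), as assumed above
p_trans = {
    ('closed', 'open'): 0.9, ('open', 'open'): 0.1,
    ('closed', 'closed'): 1.0, ('open', 'closed'): 0.0,
}
bel = {'open': 5/8, 'closed': 3/8}  # belief before the action

# Discrete case: Bel'(x) = sum over x' of P(x|u,x') Bel(x')
new_bel = {x: sum(p_trans[(x, xp)] * bel[xp] for xp in bel)
           for x in ('open', 'closed')}
print(new_bel)  # {'open': 0.0625, 'closed': 0.9375}
```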

  29. Bayes Filters: Framework • Given: • Stream of observations z and action data u: dt = {u1, z1, …, ut, zt} • Sensor model P(z|x). • Action model P(x|u,x’). • Prior probability of the system state P(x). • Wanted: • Estimate of the state X of a dynamical system. • The posterior of the state is also called the belief: Bel(xt) = P(xt | u1, z1, …, ut, zt)

  30. Markov Assumption: zt is independent of past states and measurements given xt, and xt is independent of earlier data given xt-1 and ut. Underlying Assumptions • Static world • Independent noise • Perfect model, no approximation errors

  31. Bayes Filters (z = observation, u = action, x = state)
Bel(xt) = P(xt | u1, z1, …, ut, zt)
= η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut) (Bayes)
= η P(zt | xt) P(xt | u1, z1, …, ut) (Markov)
= η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt-1) P(xt-1 | u1, z1, …, ut) dxt-1 (Total prob.)
= η P(zt | xt) ∫ P(xt | ut, xt-1) P(xt-1 | u1, z1, …, zt-1) dxt-1 (Markov, twice)
= η P(zt | xt) ∫ P(xt | ut, xt-1) Bel(xt-1) dxt-1

  32. Bayes Filter Algorithm • Algorithm Bayes_filter(Bel(x), d): • η = 0 • If d is a perceptual data item z then • For all x do: Bel’(x) = P(z|x) Bel(x); η = η + Bel’(x) • For all x do: Bel’(x) = η⁻¹ Bel’(x) • Else if d is an action data item u then • For all x do: Bel’(x) = Σx’ P(x|u,x’) Bel(x’) • Return Bel’(x)
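For a finite state space the algorithm can be sketched as below; the dictionary representation and the model callables `p_z` / `p_trans` are illustrative choices, not the lecture's notation:

```python
def bayes_filter(bel, d_kind, d, p_z=None, p_trans=None):
    """bel: dict state -> probability; d_kind: 'z' (perception) or 'u' (action).
    p_z(z, x) = P(z|x); p_trans(x, u, xp) = P(x|u,x')."""
    new_bel = {}
    if d_kind == 'z':                      # perceptual data item
        eta = 0.0
        for x in bel:
            new_bel[x] = p_z(d, x) * bel[x]
            eta += new_bel[x]
        for x in bel:
            new_bel[x] /= eta              # normalize so beliefs sum to 1
    else:                                  # action data item
        for x in bel:
            new_bel[x] = sum(p_trans(x, d, xp) * bel[xp] for xp in bel)
    return new_bel

# Door example: one measurement with P(z|open)=0.6, P(z|closed)=0.3
bel = bayes_filter({'open': 0.5, 'closed': 0.5}, 'z', 'z1',
                   p_z=lambda z, x: 0.6 if x == 'open' else 0.3)
print(bel['open'])  # 2/3
```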

  33. Bayes Filters are Familiar! • Kalman filters • Discrete filters • Particle filters • Hidden Markov models • Dynamic Bayesian networks • Partially Observable Markov Decision Processes (POMDPs)

  34. Summary • Bayes rule allows us to compute probabilities that are hard to assess otherwise. • Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. • Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

  35. Dimensions of Mobile Robot Navigation: localization, mapping, and motion control, together with their combinations: SLAM (localization + mapping), active localization (localization + motion control), exploration (mapping + motion control), and integrated approaches.

  36. Probabilistic Localization

  37. Localization with Bayes Filters: prediction with the motion model p(x|u,x’), correction of the predicted state x with the observation model p(z|x) applied to laser data.

  38. Localization with Sonars in an Occupancy Grid Map

  39. Resulting Beliefs

  40. What is the Right Representation? • Kalman filters • Multi-hypothesis tracking • Grid-based representations • Topological approaches • Particle filters

  41. Monte-Carlo Localization • Set of N samples {<x1,w1>, … <xN,wN>} containing a state x and an importance weight w • Initialize sample set according to prior knowledge • For each motion u do: • Sampling: Generate from each sample a new pose according to the motion model • For each observation z do: • Importance sampling: weigh each sample with the likelihood • Re-sampling: Draw N new samples from the sample set according to the importance weights
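The steps above can be sketched for a 1-D robot; the Gaussian motion and sensor models and all numeric values are illustrative assumptions, not the lecture's models:

```python
import math
import random

def mcl_step(particles, u, z, motion_noise=0.1, sensor_noise=0.2):
    """One sampling / importance-weighting / re-sampling cycle."""
    # Sampling: propagate each particle through the motion model
    moved = [x + u + random.gauss(0.0, motion_noise) for x in particles]
    # Importance sampling: weight each sample by the likelihood p(z|x)
    weights = [math.exp(-0.5 * ((z - x) / sensor_noise) ** 2) for x in moved]
    # Re-sampling: draw N new, equally weighted samples
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # uniform prior
for u, z in [(1.0, 3.0), (1.0, 4.0), (1.0, 5.0)]:
    particles = mcl_step(particles, u, z)
print(sum(particles) / len(particles))  # concentrates near the last observation
```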

  42. Particle Filters • Represent density by random samples • Estimation of non-Gaussian, nonlinear processes • Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter • Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa, 96] • Computer vision: [Isard and Blake, 96, 98] • Dynamic Bayesian networks: [Kanazawa et al., 95]

  43. Mobile Robot Localization with Particle Filters

  44. MCL: Sensor Update

  45. PF: Robot Motion

  46. Particle Filter Algorithm 1. Draw x’t-1 from Bel(xt-1) 2. Draw xt from P(xt | x’t-1, ut) 3. Importance factor for xt: wt = P(zt | xt) 4. Re-sample

  47. Beam-based Sensor Model

  48. Typical Measurement Errors of Range Sensors • Beams reflected by obstacles • Beams reflected by persons / caused by crosstalk • Random measurements • Maximum-range measurements

  49. Proximity Measurement • Measurement can be caused by … • a known obstacle. • cross-talk. • an unexpected obstacle (people, furniture, …). • missing all obstacles (total reflection, glass, …). • Noise is due to uncertainty … • in measuring distance to known obstacle. • in position of known obstacles. • in position of additional obstacles. • whether obstacle is missed.

  50. Beam-based Proximity Model: measurement noise (a Gaussian centered at the expected range zexp) and unexpected obstacles (an exponential between 0 and zexp), both defined over the sensor range [0, zmax].
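These two components, together with the random and max-range error types from the previous slide, are typically combined into a weighted mixture. A sketch with illustrative (uncalibrated) weights and parameters:

```python
import math

def beam_model(z, z_exp, z_max,
               w_hit=0.7, w_short=0.1, w_rand=0.1, w_max=0.1,
               sigma_hit=0.2, lam_short=1.0):
    """Mixture likelihood p(z | expected range z_exp); parameters are assumptions."""
    # Measurement noise: Gaussian centered at the expected range z_exp
    p_hit = (math.exp(-0.5 * ((z - z_exp) / sigma_hit) ** 2)
             / (sigma_hit * math.sqrt(2.0 * math.pi)))
    # Unexpected obstacles: exponential, only before the expected range
    p_short = lam_short * math.exp(-lam_short * z) if z <= z_exp else 0.0
    # Random measurements: uniform over [0, z_max]
    p_rand = 1.0 / z_max
    # Max-range readings: point mass at z_max
    p_max = 1.0 if z >= z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_rand * p_rand + w_max * p_max
```

With these parameters, a reading at the expected range is far more likely than one well beyond it, and max-range readings keep a separate probability mass.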
