
On Experimental Research in Sampling-based Motion Planning

Presentation Transcript


  1. On Experimental Research in Sampling-based Motion Planning
      Roland Geraerts
      Workshop on Benchmarks in Robotics Research, IROS 2006

  2. Probabilistic Roadmap Method
      Construction of the roadmap G = (V, E):
      loop
        c ← a free sample
        add c to the vertices V
        Nc ← a set of neighbor nodes
        for all c' in Nc, in order of increasing distance
          if c' and c are not connected in G then
            if a local path between c and c' exists then
              add the edge (c, c') to E
      [Figure: free space, forbidden space, sample c, colliding path, local path between c and c']
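
      The construction loop above maps almost directly onto code. Below is a minimal, self-contained C++ sketch for a 2D point robot among circular obstacles; the obstacle model, the values of n and k, the straight-line local planner, and the union-find connectivity test are illustrative choices for this sketch, not taken from the slides.

      // Minimal, self-contained PRM construction for a 2D point robot among
      // circular obstacles (illustrative only; not the author's implementation).
      #include <algorithm>
      #include <cmath>
      #include <cstdio>
      #include <numeric>
      #include <random>
      #include <utility>
      #include <vector>

      struct Config { double x, y; };
      struct Circle { double x, y, r; };

      static const std::vector<Circle> obstacles = {{0.5, 0.5, 0.2}, {0.2, 0.8, 0.1}};

      static bool collisionFree(const Config& c) {
          for (const Circle& o : obstacles)
              if (std::hypot(c.x - o.x, c.y - o.y) <= o.r) return false;
          return true;
      }

      // Straight-line local planner: collision-check interpolated configurations.
      static bool localPathExists(const Config& a, const Config& b) {
          const int steps = 50;
          for (int i = 0; i <= steps; ++i) {
              double t = double(i) / steps;
              Config p{a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)};
              if (!collisionFree(p)) return false;
          }
          return true;
      }

      // Union-find, used to test "c' and c are not connected in G" cheaply.
      struct UnionFind {
          std::vector<int> parent;
          int find(int i) { return parent[i] == i ? i : parent[i] = find(parent[i]); }
          void unite(int a, int b) { parent[find(a)] = find(b); }
          void add() { parent.push_back((int)parent.size()); }
      };

      int main() {
          std::mt19937 rng(42);
          std::uniform_real_distribution<double> uni(0.0, 1.0);

          std::vector<Config> V;                        // vertices
          std::vector<std::pair<int, int>> E;           // edges (indices into V)
          UnionFind uf;
          const int n = 200, k = 10;                    // illustrative parameters

          for (int it = 0; it < n; ++it) {
              Config c;                                 // c <- a free sample
              do { c = {uni(rng), uni(rng)}; } while (!collisionFree(c));
              V.push_back(c);                           // add c to the vertices V
              uf.add();
              int ci = (int)V.size() - 1;

              // Nc: the k nearest existing vertices, in order of increasing distance.
              std::vector<int> idx(ci);
              std::iota(idx.begin(), idx.end(), 0);
              std::sort(idx.begin(), idx.end(), [&](int a, int b) {
                  return std::hypot(V[a].x - c.x, V[a].y - c.y) <
                         std::hypot(V[b].x - c.x, V[b].y - c.y); });
              if ((int)idx.size() > k) idx.resize(k);

              for (int ni : idx)
                  if (uf.find(ci) != uf.find(ni) && localPathExists(c, V[ni])) {
                      E.push_back({ci, ni});            // add the edge (c, c') to E
                      uf.unite(ci, ni);
                  }
          }
          std::printf("roadmap: %zu vertices, %zu edges\n", V.size(), E.size());
          return 0;
      }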

  3. Probabilistic Roadmap Method
      Construction of the roadmap G = (V, E): as on the previous slide.
      Query:
        connect the start sample s and the goal sample g to the roadmap
        compute the shortest path with Dijkstra's algorithm
      [Figure: free space, forbidden space, samples, local paths, start/goal, shortest path]
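
      Continuing the sketch after slide 2 (Config, localPathExists, and a roadmap (V, E) as defined there), the query phase can be sketched as follows: s and g are connected to every roadmap vertex they can see, and Dijkstra's algorithm then returns the length of the shortest path, or a negative value if none exists. Again, this is only an illustration, not the author's code.

      // Query phase for the PRM sketch above: connect s and g, then run Dijkstra.
      #include <cmath>
      #include <functional>
      #include <limits>
      #include <queue>
      #include <utility>
      #include <vector>

      static double dist(const Config& a, const Config& b) {
          return std::hypot(a.x - b.x, a.y - b.y);
      }

      // Returns the length of the shortest roadmap path from s to g, or -1.0
      // if s and g end up in different connected components.
      double query(std::vector<Config> V, std::vector<std::pair<int, int>> E,
                   const Config& s, const Config& g) {
          // Connect s and g to the roadmap like ordinary samples.
          V.push_back(s); int si = (int)V.size() - 1;
          V.push_back(g); int gi = (int)V.size() - 1;
          for (int vi = 0; vi < si; ++vi) {
              if (localPathExists(s, V[vi])) E.push_back({si, vi});
              if (localPathExists(g, V[vi])) E.push_back({gi, vi});
          }

          // Adjacency list weighted by Euclidean edge length.
          std::vector<std::vector<std::pair<int, double>>> adj(V.size());
          for (const auto& e : E) {
              double w = dist(V[e.first], V[e.second]);
              adj[e.first].push_back({e.second, w});
              adj[e.second].push_back({e.first, w});
          }

          // Dijkstra's shortest path from si to gi.
          std::vector<double> d(V.size(), std::numeric_limits<double>::infinity());
          std::priority_queue<std::pair<double, int>,
                              std::vector<std::pair<double, int>>,
                              std::greater<>> pq;
          d[si] = 0.0; pq.push({0.0, si});
          while (!pq.empty()) {
              auto [du, u] = pq.top(); pq.pop();
              if (du > d[u]) continue;
              for (auto [v, w] : adj[u])
                  if (du + w < d[v]) { d[v] = du + w; pq.push({d[v], v}); }
          }
          return std::isinf(d[gi]) ? -1.0 : d[gi];
      }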

  4. Methods
      • General setup
      • SAMPLE
      • Implemented in C++ using VS.NET 2003
      • Easy API to add techniques
      • GUI: easily set up experiments
      • Repeatability: load/save an experiment
      • Easily comparing different techniques
      • Easily examining parameters of a technique
      • Automatically collect/process the data of an experiment (a generic sketch of this idea follows below)
      • Demo
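
      SAMPLE itself is not described in more detail on these slides, so the following is only a generic C++ illustration of the two ideas above that matter most for benchmarking: an experiment is a small, saveable description (so it can be reloaded and repeated exactly), and every run appends its measurements to a machine-readable file for later processing. The Experiment fields and file layouts are invented for this sketch.

      // Generic illustration of repeatable experiments (not SAMPLE's actual API).
      #include <fstream>
      #include <string>

      struct Experiment {                 // hypothetical description of one experiment
          std::string environment;        // geometry file (e.g. VRML)
          std::string technique;          // e.g. "PRM"
          int runs;                       // number of repetitions
          unsigned seed;                  // makes randomized runs repeatable
      };

      void save(const Experiment& e, const std::string& path) {
          std::ofstream f(path);
          f << e.environment << '\n' << e.technique << '\n' << e.runs << '\n' << e.seed << '\n';
      }

      Experiment load(const std::string& path) {
          Experiment e; std::ifstream f(path);
          std::getline(f, e.environment); std::getline(f, e.technique);
          f >> e.runs >> e.seed;
          return e;
      }

      void appendResult(const std::string& csv, const Experiment& e, int run, double seconds) {
          std::ofstream f(csv, std::ios::app);
          f << e.technique << ',' << e.environment << ',' << run << ',' << seconds << '\n';
      }

      int main() {
          Experiment e{"corridor.wrl", "PRM", 100, 42u};
          save(e, "experiment.txt");
          Experiment loaded = load("experiment.txt");     // identical settings: repeatable run
          appendResult("results.csv", loaded, /*run=*/1, /*seconds=*/3.14);
          return 0;
      }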

  5. Methods
      • Test problems
      • Conclusions were often too general due to a limited set of problems
      • Also choose worst-case problems

  6. Methods
      • Interchangeability
      • Libraries taking care of common functionality
      • Collision checking, visualization: Callisto: http://www.cs.uu.nl/dennis/callisto/callisto.html [Nieuwenhuisen]
      • Graph utilities: Atlas: http://www.cs.uu.nl/dennis/atlas/atlas.html [Nieuwenhuisen]
      • Nearest neighbor: MPNN: http://msl.cs.uiuc.edu/~yershova/mpnn/mpnn.htm [Yershova, Lavalle]
      • Deterministic sampling methods: http://msl.cs.uiuc.edu/~yershova/so3sampling/so3sampling.htm [Yershova]
      • Rotation in 3D: http://www.kuffner.org/james/software [Kuffner]

  7. Methods
      • Interchangeability
      • Source code of motion planning frameworks
      • Motion planning kit MPK: http://ai.stanford.edu/~mitul/mpk [Latombe]
      • Move3D: http://www.laas.fr/~nic/Move3D [Siméon]
      • Motion strategy library MSL: http://msl.cs.uiuc.edu/msl [Lavalle]
      • Unfortunately, the code is often not up to date

  8. Methods
      • Interchangeability
      • Sources
      • Geometry of environment/robot: VRML
      • Problem descriptions: XML (an illustrative example follows below)
      • Advantages of using existing languages
      • Well documented
      • Parsers/type checkers are available for all platforms
      • Existing programs for creating/editing the files
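
      As an illustration, a problem description in XML might look like the file below. The element and attribute names are invented for this example and are not the schema actually used; the point is only that the geometry stays in VRML files while the query and robot data sit in a small, easily parsed XML document.

      <problem name="corridor">
        <environment file="corridor.wrl"/>   <!-- geometry in VRML -->
        <robot file="l-shape.wrl" dof="6"/>
        <query>
          <start x="0.0" y="0.0" z="0.0" rx="0" ry="0" rz="0"/>
          <goal  x="4.0" y="1.0" z="0.0" rx="0" ry="0" rz="90"/>
        </query>
      </problem>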

  9. Methods
      • Interchangeability
      • Sources of geometry files and benchmarks
      • http://www.give-lab.cs.uu.nl/movie/moviemodels [MOVIE]
      • http://faculty.cs.tamu.edu/amato/dsmft/benchmarks [Amato]
      • http://mpb.ce.unipr.it/ [Reggiani]
      • Problems should be put online when the article is published

  10. Results
      • Evaluation of solution
      • Compare a new technique with existing ones
      • Pitfall: parameter tuning only for the new technique
      • Compare against the optimal solution
      • Often only known for trivial cases
      • Approximate the optimal solution by many runs (see the sketch below)
      • User studies
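
      One way to realise "approximate the optimal solution by many runs" is to run the randomized planner many times on the same query and keep the best solution cost found as a reference value, against which each technique's average result is then reported. A small sketch, where planPathLength() is a hypothetical function (not from the slides) that performs one run and returns the resulting path length, or infinity on failure:

      #include <algorithm>
      #include <limits>

      // Hypothetical: performs one randomized run of the planner on the fixed
      // query and returns the length of the path it found, or +infinity on failure.
      double planPathLength(unsigned seed);

      // Approximate the optimal path length as the best result over many runs.
      double approximateOptimum(int runs) {
          double best = std::numeric_limits<double>::infinity();
          for (int i = 0; i < runs; ++i)
              best = std::min(best, planPathLength(/*seed=*/i));
          return best;   // reference value to compare individual techniques against
      }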

  11. Results
      • Statistics
      • Large variances in running times
      • Complicates statistical analysis
      • Makes analysis unreliable
      • Is undesirable from a user's point of view
      • Perform a large number of runs
      • Provide more statistical info, e.g. box plots (see the sketch below)
      • Deterministic versus randomized techniques
      • Deterministic techniques can respond sensitively to small changes in the problem setting
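
      A box plot summarises the distribution of running times with five numbers (minimum, lower quartile, median, upper quartile, maximum), which are straightforward to compute from the recorded data. A self-contained sketch; the quartile convention is one of several, and the example times are made up to show the kind of outliers that inflate the variance:

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      // Five-number summary of a set of running times, the basis of a box plot.
      // Quartiles are taken as simple order statistics; other conventions exist.
      struct FiveNumbers { double min, q1, median, q3, max; };

      FiveNumbers summarize(std::vector<double> t) {
          std::sort(t.begin(), t.end());
          auto at = [&](double p) { return t[(size_t)(p * (t.size() - 1))]; };
          return {t.front(), at(0.25), at(0.5), at(0.75), t.back()};
      }

      int main() {
          // Example: running times (seconds) of many runs of one technique.
          std::vector<double> times = {0.8, 1.1, 0.9, 4.7, 1.0, 1.2, 0.9, 9.3, 1.1, 1.0};
          FiveNumbers s = summarize(times);
          std::printf("min %.2f  q1 %.2f  median %.2f  q3 %.2f  max %.2f\n",
                      s.min, s.q1, s.median, s.q3, s.max);
          return 0;
      }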

  12. Conclusion
      • Automate conducting experiments as much as possible
      • Choose test problems carefully
      • Source code, software components and problem data should be made available
      • Use standard file formats (VRML, XML)
      • Provide an extensive statistical analysis
