
Intro to Motion Planning II


  1. Intro to Motion Planning II Maxim Likhachev University of Pennsylvania

  2. Graph Construction
  • Cell decomposition (last lecture)
    - X-connected grids
    - lattice-based graphs (replicate an action template online)
  • Skeletonization of the environment/C-space (first part of today’s lecture)

  3. Skeletonization
  • Visibility Graphs [Wesley & Lozano-Perez ’79]
    - based on the idea that the shortest path consists of obstacle-free straight line segments connecting all obstacle vertices plus the start and goal
  [figure: C-space/environment with start and goal configurations and a suboptimal path]

  4. Skeletonization
  • Visibility Graphs
    - based on the idea that the shortest path consists of obstacle-free straight line segments connecting all obstacle vertices plus the start and goal (Assumption?)
  [figure: C-space/environment with start and goal configurations and a suboptimal path]

  5. Skeletonization
  • Visibility Graphs
    - based on the idea that the shortest path consists of obstacle-free straight line segments connecting all obstacle vertices plus the start and goal (Assumption? Proof for this case?)
  [figure: C-space/environment with start and goal configurations and a suboptimal path]

  6. Skeletonization
  • Visibility Graphs [Wesley & Lozano-Perez ’79]
    - construct a graph by connecting all obstacle vertices, start, and goal with obstacle-free straight line segments (the graph has O(n^2) edges, where n = # of vertices)
    - search the graph for a shortest path
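The construction above can be sketched in a few dozen lines. The following is a toy implementation, assuming axis-aligned rectangular obstacles for simplicity (the original method handles general polygons); it treats a segment as blocked if it properly crosses an obstacle edge or its midpoint lies strictly inside an obstacle, which suffices for convex rectangles. `visibility_graph_path` and the helpers are hypothetical names, not from the lecture.

```python
import heapq
import itertools
import math

def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def _segments_cross(p1, p2, q1, q2):
    """True iff the open segments p1p2 and q1q2 properly intersect."""
    return (_orient(p1, p2, q1) * _orient(p1, p2, q2) < 0
            and _orient(q1, q2, p1) * _orient(q1, q2, p2) < 0)

def _corners(rect):
    x0, y0, x1, y1 = rect
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

def _blocked(p, q, rects):
    """Blocked if pq properly crosses an obstacle edge or its midpoint is
    strictly inside an obstacle; grazing the boundary stays allowed, as
    shortest paths do exactly that."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    for x0, y0, x1, y1 in rects:
        if x0 < mx < x1 and y0 < my < y1:
            return True
        cs = _corners((x0, y0, x1, y1))
        if any(_segments_cross(p, q, cs[i], cs[(i + 1) % 4]) for i in range(4)):
            return True
    return False

def visibility_graph_path(start, goal, rects):
    """Connect all mutually visible vertices, then run Dijkstra's.
    Assumes the goal is reachable."""
    nodes = [start, goal] + [c for r in rects for c in _corners(r)]
    adj = {n: [] for n in nodes}
    for a, b in itertools.combinations(nodes, 2):
        if not _blocked(a, b, rects):
            d = math.dist(a, b)
            adj[a].append((b, d))
            adj[b].append((a, d))
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal or d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, n = [goal], goal
    while n != start:
        n = prev[n]
        path.append(n)
    return path[::-1], dist[goal]

# one square obstacle between diagonally opposite start and goal
path, cost = visibility_graph_path((0, 0), (4, 4), [(1, 1, 3, 3)])
```

Note the O(n^2) pairwise visibility test, matching the slide's graph-size bound; the returned path hugs the obstacle corner, illustrating the "path is too close to obstacles" disadvantage on the next slide.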

  7. Skeletonization
  • Visibility Graphs
    - advantages:
      - independent of the size of the environment
    - disadvantages:
      - path runs too close to obstacles
      - hard to deal with non-uniform cost functions
      - hard to deal with non-polygonal obstacles
      - can get expensive in high dimensions with many obstacles

  8. Skeletonization
  • Voronoi diagrams [Rowat ’79]
    - Voronoi diagram: the set of all points that are equidistant from their two nearest obstacles
    - based on the idea of maximizing clearance instead of minimizing travel distance
  (the example above is borrowed from “AI: A Modern Approach” by S. Russell & P. Norvig)

  9. Skeletonization
  • Voronoi diagrams
    - compute the Voronoi diagram (O(n log n), where n = # of invalid configurations)
    - add a shortest path segment from the start to the nearest segment of the Voronoi diagram
    - add a shortest path segment from the goal to the nearest segment of the Voronoi diagram
    - compute the shortest path in the resulting graph
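Exact Voronoi diagrams are usually computed with a sweep-line algorithm from a computational-geometry library; a common grid approximation of the same clearance-maximizing idea is a multi-source "brushfire" BFS that labels every free cell with its nearest obstacle component and keeps the cells where two different labels meet. A rough sketch (all names hypothetical, not from the lecture):

```python
from collections import deque

def approximate_voronoi(grid):
    """grid[r][c] == 1 marks an obstacle cell. Returns the set of free
    cells on the approximate Voronoi diagram: the ridge where brushfire
    wavefronts grown from two different obstacles meet."""
    rows, cols = len(grid), len(grid[0])

    def nbrs(r, c):
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    # 1. label connected obstacle components
    comp, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in comp:
                comp[(r, c)] = next_id
                q = deque([(r, c)])
                while q:
                    cr, cc = q.popleft()
                    for nr, nc in nbrs(cr, cc):
                        if grid[nr][nc] == 1 and (nr, nc) not in comp:
                            comp[(nr, nc)] = next_id
                            q.append((nr, nc))
                next_id += 1

    # 2. brushfire: BFS outward from all obstacles at once, so every free
    #    cell inherits the id of (one of) its nearest obstacle(s)
    label, q = dict(comp), deque(comp)
    while q:
        r, c = q.popleft()
        for nr, nc in nbrs(r, c):
            if (nr, nc) not in label:
                label[(nr, nc)] = label[(r, c)]
                q.append((nr, nc))

    # 3. ridge = free cells adjacent to a cell owned by a different obstacle
    ridge = set()
    for (r, c), lab in label.items():
        if grid[r][c] == 0:
            for nr, nc in nbrs(r, c):
                if grid[nr][nc] == 0 and label.get((nr, nc), lab) != lab:
                    ridge.add((r, c))
    return ridge

# two parallel walls: the ridge should run down the middle of the corridor
grid = [[1, 0, 0, 0, 1] for _ in range(5)]
ridge = approximate_voronoi(grid)
```

Searching along `ridge` (plus the two connector segments the slide describes) yields the maximum-clearance path; the grid version also hints at why the exact construction gets difficult above 2-D.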

  10. Skeletonization
  • Voronoi diagrams
    - advantages:
      - tends to stay away from obstacles
      - independent of the size of the environment
    - disadvantages:
      - difficult to construct in more than 2-D
      - can result in highly suboptimal paths

  11. Skeletonization
  • Voronoi diagrams
    - advantages:
      - tends to stay away from obstacles
      - independent of the size of the environment
    - disadvantages:
      - difficult to construct in more than 2-D
      - can result in highly suboptimal paths (In which environments?)

  12. Skeletonization
  • Probabilistic roadmaps [Kavraki et al. ’96]
    - construct a graph by:
      - randomly sampling valid configurations
      - adding edges between the samples that are easy to connect with a simple local controller (e.g., follow a straight line)
      - adding the start and goal configurations to the graph with appropriate edges
    - compute the shortest path in the graph

  13. Skeletonization
  • Probabilistic roadmaps [Kavraki et al. ’96]
    - simple and highly effective in high dimensions; very popular
    - can result in suboptimal paths, with no guarantees on suboptimality
    - difficulty with narrow passages
    - more in the later lectures on this and other randomized motion planning techniques
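The three roadmap steps above can be sketched in 2-D with circular obstacles (hypothetical helper names; real planners use k-nearest connection strategies and smarter local planners than dense segment sampling):

```python
import heapq
import math
import random

def collision_free(p, q, circles, step=0.05):
    """Densely sample the straight segment p -> q; reject it if a sample
    lands inside an obstacle circle (a crude stand-in local controller)."""
    n = max(1, int(math.dist(p, q) / step))
    for i in range(n + 1):
        t = i / n
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - cx, y - cy) <= r for cx, cy, r in circles):
            return False
    return True

def prm(start, goal, circles, n_samples=150, radius=0.4, seed=0):
    """PRM in the unit square: sample free configurations, connect nearby
    connectable pairs, then run Dijkstra's from start to goal."""
    rng = random.Random(seed)
    nodes = [start, goal]
    while len(nodes) < n_samples + 2:          # rejection-sample free space
        p = (rng.random(), rng.random())
        if all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in circles):
            nodes.append(p)
    adj = {p: [] for p in nodes}
    for i, a in enumerate(nodes):              # connect nearby valid pairs
        for b in nodes[i + 1:]:
            d = math.dist(a, b)
            if d <= radius and collision_free(a, b, circles):
                adj[a].append((b, d))
                adj[b].append((a, d))
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if goal not in dist:
        return None                            # roadmap failed to connect
    path, n = [goal], goal
    while n != start:
        n = prev[n]
        path.append(n)
    return path[::-1]

path = None
for seed in range(10):                         # sampling may fail; retry
    path = prm((0.05, 0.05), (0.95, 0.95), [(0.5, 0.5, 0.2)], seed=seed)
    if path:
        break
```

The retry loop makes the suboptimality and completeness caveats concrete: a roadmap can simply fail to connect start and goal, and nothing bounds the cost of the path it does find.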

  14. Searching Graphs for a Least-cost Path
  [figure: a small example graph with states sstart, s1–s4, sgoal and edge costs between 1 and 3]
  • Once a graph is constructed (from skeletonization, uniform or adaptive cell decomposition, a lattice, or anything else), we need to search it for a least-cost path
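Whatever the construction method, the search step has the same interface. Below is a plain Dijkstra's search over a small explicit graph; the edge costs are hypothetical, since the slide's figure did not survive extraction:

```python
import heapq
import math

def dijkstra(adj, start, goal):
    """Least-cost path in an explicit graph given as adjacency lists."""
    dist, prev, pq = {start: 0}, {}, [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue                           # stale queue entry
        for v, w in adj.get(u, ()):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, n = [goal], goal
    while n != start:
        n = prev[n]
        path.append(n)
    return path[::-1], dist[goal]

# hypothetical edge costs on the slide's states
adj = {
    "sstart": [("s1", 2), ("s4", 1)],
    "s1":     [("s2", 2), ("sgoal", 1)],
    "s2":     [("s1", 1)],
    "s4":     [("s3", 3)],
    "s3":     [("sgoal", 1)],
}
path, cost = dijkstra(adj, "sstart", "sgoal")  # (["sstart", "s1", "sgoal"], 3)
```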

  15. Searching Graphs for a Least-cost Path
  • A* search (last lecture)
  • Dealing with large graphs (this lecture)

  16. Effect of the Heuristic Function
  • A* Search: expands states in the order of f = g+h values

  ComputePath function
  while (sgoal is not expanded)
    remove s with the smallest [f(s) = g(s)+h(s)] from OPEN;
    insert s into CLOSED;                                // expansion of s
    for every successor s’ of s such that s’ not in CLOSED
      if g(s’) > g(s) + c(s,s’)
        g(s’) = g(s) + c(s,s’);
        insert s’ into OPEN;
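The pseudocode above transcribes fairly directly to Python, with OPEN as a binary heap and CLOSED as a set. The grid successor function and Manhattan heuristic in the example are illustrative choices, not from the slide:

```python
import heapq
import math

def astar(succ, h, start, goal):
    """succ(s) yields (s', c(s, s')); h must be consistent for optimality."""
    g = {start: 0}
    OPEN = [(h(start), start)]
    CLOSED = set()
    parent = {}
    expansions = 0
    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:
            continue                 # stale OPEN entry for an expanded state
        CLOSED.add(s)                # expansion of s
        expansions += 1
        if s == goal:                # the while-condition of the pseudocode
            path = [s]
            while s != start:
                s = parent[s]
                path.append(s)
            return path[::-1], g[goal], expansions
        for s2, c in succ(s):
            if s2 not in CLOSED and g[s] + c < g.get(s2, math.inf):
                g[s2] = g[s] + c
                parent[s2] = s
                heapq.heappush(OPEN, (g[s2] + h(s2), s2))
    return None

# illustrative example: empty 10x10 4-connected grid, Manhattan heuristic
def grid_succ(s):
    x, y = s
    for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= n[0] < 10 and 0 <= n[1] < 10:
            yield n, 1

path, cost, expansions = astar(
    grid_succ, lambda s: abs(s[0] - 9) + abs(s[1] - 9), (0, 0), (9, 9))
```

Since heaps cannot decrease a key in place, the sketch pushes duplicates and skips stale entries on pop, a common and equivalent variant of the slide's "insert s’ into OPEN".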

  17. Effect of the Heuristic Function
  • A* Search: expands states in the order of f = g+h values
  • Sketch of proof of optimality by induction, for consistent h:
    1. assume all previously expanded states have optimal g-values
    2. the next state to expand is s: f(s) = g(s)+h(s) is minimal among states in OPEN
    3. assume g(s) is suboptimal
    4. then there must be at least one state s’ on an optimal path from start to s such that it is in OPEN but wasn’t expanded
    5. so g(s’) + h(s’) ≥ g(s) + h(s)
    6. but g(s’) + c*(s’,s) < g(s) => g(s’) + c*(s’,s) + h(s) < g(s) + h(s) => (by consistency, h(s’) ≤ c*(s’,s) + h(s)) g(s’) + h(s’) < g(s) + h(s), contradicting 5
    7. thus it must be the case that g(s) is optimal

  18. Effect of the Heuristic Function
  • g(s): the cost of a shortest path from sstart to s found so far
  • h(s): an (under)estimate of the cost of a shortest path from s to sgoal
  [figure: a path from sstart through s to sgoal, with g(s) and h(s) marked]
  • A* Search: expands states in the order of f = g+h values
  • Dijkstra’s: expands states in the order of f = g values (pretty much)
  • Intuitively: f(s) is an estimate of the cost of a least-cost path from start to goal via s

  19. Effect of the Heuristic Function
  • g(s): the cost of a shortest path from sstart to s found so far
  • h(s): an (under)estimate of the cost of a shortest path from s to sgoal
  [figure: a path from sstart through s to sgoal, with g(s) and h(s) marked]
  • A* Search: expands states in the order of f = g+h values
  • Dijkstra’s: expands states in the order of f = g values (pretty much)
  • Weighted A*: expands states in the order of f = g+εh values, ε > 1 = bias towards states that are closer to the goal
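One way to see the ε > 1 bias concretely is to run the same search with different ε on a small grid with a wall and compare costs and expansion counts. The setup below is illustrative, not from the slides:

```python
import heapq
import math

def weighted_astar(start, goal, blocked, size, eps):
    """A* with h inflated by eps, on a 4-connected unit-cost grid."""
    def h(s):
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])   # Manhattan
    g = {start: 0}
    OPEN = [(eps * h(start), start)]
    CLOSED = set()
    expansions = 0
    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:
            continue                   # stale queue entry
        CLOSED.add(s)
        expansions += 1
        if s == goal:
            return g[s], expansions
        x, y = s
        for s2 in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= s2[0] < size and 0 <= s2[1] < size
                    and s2 not in blocked and s2 not in CLOSED
                    and g[s] + 1 < g.get(s2, math.inf)):
                g[s2] = g[s] + 1
                heapq.heappush(OPEN, (g[s2] + eps * h(s2), s2))
    return None

wall = {(5, y) for y in range(1, 10)}          # vertical wall, gap at y == 0
cost1, exp1 = weighted_astar((0, 9), (9, 9), wall, 10, eps=1.0)
cost5, exp5 = weighted_astar((0, 9), (9, 9), wall, 10, eps=5.0)
```

With ε = 1 the result is optimal (cost 27 here); with ε = 5 the cost is only guaranteed to be within 5x of optimal, and the expansion count is typically, though (as a later slide asks) not provably, smaller.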

  20. Effect of the Heuristic Function
  • Dijkstra’s: expands states in the order of f = g values
  • What are the states expanded?
  [figure: states expanded between sstart and sgoal]

  21. Effect of the Heuristic Function
  • A* Search: expands states in the order of f = g+h values
  • What are the states expanded?
  [figure: states expanded between sstart and sgoal]

  22. Effect of the Heuristic Function
  • A* Search: expands states in the order of f = g+h values
    - for large problems this results in A* quickly running out of memory (memory: O(n))
  [figure: states expanded between sstart and sgoal]

  23. Effect of the Heuristic Function
  • Weighted A* Search: expands states in the order of f = g+εh values, ε > 1 = bias towards states that are closer to the goal
    - what states are expanded? – research question
    - key to finding a solution fast: shallow minima of the h(s)-h*(s) function
  [figure: states expanded between sstart and sgoal]

  24. Effect of the Heuristic Function
  • Weighted A* Search:
    - trades off optimality for speed
    - ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
    - in many domains, it has been shown to be orders of magnitude faster than A*
    - the research problem becomes developing a heuristic function with shallow local minima

  25. Effect of the Heuristic Function
  • Weighted A* Search:
    - trades off optimality for speed
    - ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
    - in many domains, it has been shown to be orders of magnitude faster than A*
    - the research problem becomes developing a heuristic function with shallow local minima
  • Is it guaranteed to expand no more states than A*?

  26. Effect of the Heuristic Function
  [figure: ε=2.5 – 13 expansions, solution = 11 moves; ε=1.5 – 15 expansions, solution = 11 moves; ε=1.0 – 20 expansions, solution = 10 moves]
  • Constructing an anytime search based on weighted A*:
    - find the best path possible given some amount of time for planning
    - do it by running a series of weighted A* searches with decreasing ε

  27. Effect of the Heuristic Function
  [figure: ε=2.5 – 13 expansions, solution = 11 moves; ε=1.5 – 15 expansions, solution = 11 moves; ε=1.0 – 20 expansions, solution = 10 moves]
  • Constructing an anytime search based on weighted A*:
    - find the best path possible given some amount of time for planning
    - do it by running a series of weighted A* searches with decreasing ε
  • Inefficient because:
    - many state values remain the same between search iterations
    - we should be able to reuse the results of previous searches

  28. Effect of the Heuristic Function
  [figure: ε=2.5 – 13 expansions, solution = 11 moves; ε=1.5 – 15 expansions, solution = 11 moves; ε=1.0 – 20 expansions, solution = 10 moves]
  • Constructing an anytime search based on weighted A*:
    - find the best path possible given some amount of time for planning
    - do it by running a series of weighted A* searches with decreasing ε
  • ARA* [Likhachev et al. ’03]
    - an efficient version of the above that reuses state values within any search iteration
    - will learn next lecture, after we learn about the incremental version of A*
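The naive anytime scheme the slide describes (the inefficient version, before ARA*'s reuse) is just a loop over independent weighted A* searches with a decreasing ε schedule; a sketch, with an illustrative grid example:

```python
import heapq
import math

def weighted_astar(succ, h, start, goal, eps):
    """One full weighted A* search; returns the solution cost (inf if none)."""
    g, OPEN, CLOSED = {start: 0}, [(eps * h(start), start)], set()
    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:
            continue
        CLOSED.add(s)
        if s == goal:
            return g[s]
        for s2, c in succ(s):
            if s2 not in CLOSED and g[s] + c < g.get(s2, math.inf):
                g[s2] = g[s] + c
                heapq.heappush(OPEN, (g[s2] + eps * h(s2), s2))
    return math.inf

def anytime_wastar(succ, h, start, goal, schedule=(2.5, 1.5, 1.0)):
    """Yield (eps, best-cost-so-far) after each full search; a real anytime
    planner would check its time budget between iterations and stop."""
    best = math.inf
    for eps in schedule:                # each search restarts from scratch
        best = min(best, weighted_astar(succ, h, start, goal, eps))
        yield eps, best

def grid_succ(s):
    x, y = s
    for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= n[0] < 8 and 0 <= n[1] < 8:
            yield n, 1

h = lambda s: abs(s[0] - 7) + abs(s[1] - 7)
results = list(anytime_wastar(grid_succ, h, (0, 0), (7, 7)))
```

Each iteration throws away all g-values from the previous one, which is exactly the inefficiency ARA* removes by reusing them.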

  29. Effect of the Heuristic Function
  • Useful properties to know:
    - if h1(s), h2(s) are consistent, then h(s) = max(h1(s), h2(s)) is consistent
    - if A* uses an ε-consistent heuristic, i.e., h(sgoal) = 0 and h(s) ≤ ε·c(s,succ(s)) + h(succ(s)) for all s ≠ sgoal, then A* is ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
    - weighted A* is A* with an ε-consistent heuristic
    - if h1(s), h2(s) are consistent, then h(s) = h1(s) + h2(s) is ε-consistent

  30. Effect of the Heuristic Function
  • Useful properties to know:
    - if h1(s), h2(s) are consistent, then h(s) = max(h1(s), h2(s)) is consistent (Proof?)
    - if A* uses an ε-consistent heuristic, i.e., h(sgoal) = 0 and h(s) ≤ ε·c(s,succ(s)) + h(succ(s)) for all s ≠ sgoal, then A* is ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
    - weighted A* is A* with an ε-consistent heuristic
    - if h1(s), h2(s) are consistent, then h(s) = h1(s) + h2(s) is ε-consistent

  31. Effect of the Heuristic Function
  • Useful properties to know:
    - if h1(s), h2(s) are consistent, then h(s) = max(h1(s), h2(s)) is consistent (Proof?)
    - if A* uses an ε-consistent heuristic, i.e., h(sgoal) = 0 and h(s) ≤ ε·c(s,succ(s)) + h(succ(s)) for all s ≠ sgoal, then A* is ε-suboptimal: cost(solution) ≤ ε·cost(optimal solution)
    - weighted A* is A* with an ε-consistent heuristic
    - if h1(s), h2(s) are consistent, then h(s) = h1(s) + h2(s) is ε-consistent (What is ε? Proof?)
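These properties can be checked numerically on a small 4-connected unit-cost grid. Below, h1 is the Manhattan distance and h2 the Chebyshev distance, both consistent for this graph; their max stays consistent, while their sum fails plain consistency but is ε-consistent with ε = 2 (illustrative choices, not from the slide):

```python
SIZE, GOAL = 6, (5, 5)

def h1(s):      # Manhattan distance: consistent for 4-connected unit costs
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def h2(s):      # Chebyshev distance: also consistent (never exceeds h1)
    return max(abs(s[0] - GOAL[0]), abs(s[1] - GOAL[1]))

def edges():    # every 4-connected edge of the grid, cost 1 each way
    for x in range(SIZE):
        for y in range(SIZE):
            for n in ((x + 1, y), (x, y + 1)):
                if n[0] < SIZE and n[1] < SIZE:
                    yield (x, y), n

def eps_consistent(h, eps):
    """h(sgoal) == 0 and h(s) <= eps*c(s,s') + h(s') over every edge
    (checked in both directions, since each edge is traversable both ways)."""
    return h(GOAL) == 0 and all(
        h(s) <= eps + h(t) and h(t) <= eps + h(s) for s, t in edges())

max_h = lambda s: max(h1(s), h2(s))
sum_h = lambda s: h1(s) + h2(s)
```

For two consistent heuristics on a unit-cost graph, h1 + h2 can drop by up to 2 across one edge, which is exactly why ε = 2 works for the sum here.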

  32. Memory Issues
  • A* performs a provably minimum number of expansions (O(n)) for finding a provably optimal solution
  • The memory requirements of A* (O(n)) can be improved, though
  • The memory requirements of weighted A* are often, but not always, better

  33. Memory Issues
  • Alternatives:
  • Depth-First Search (w/o coloring all expanded states):
    - explores one possible path at a time, avoiding looping and keeping in memory only the best path discovered so far
    - complete and optimal (assuming finite state-spaces)
    - memory: O(bm), where b = max. branching factor, m = max. path length
    - complexity: O(b^m), since it will repeatedly re-expand states

  34. Memory Issues
  • Alternatives:
  • Depth-First Search (w/o coloring all expanded states):
    - explores one possible path at a time, avoiding looping and keeping in memory only the best path discovered so far
    - complete and optimal (assuming finite state-spaces)
    - memory: O(bm), where b = max. branching factor, m = max. path length
    - complexity: O(b^m), since it will repeatedly re-expand states
  • Example:
    - graph: a 4-connected grid of 40 by 40 cells; start: center of the grid
    - A* expands up to 800 states; DFS may expand way over 4^20 > 10^12 states

  35. Memory Issues
  • Alternatives:
  • Depth-First Search (w/o coloring all expanded states):
    - explores one possible path at a time, avoiding looping and keeping in memory only the best path discovered so far
    - complete and optimal (assuming finite state-spaces)
    - memory: O(bm), where b = max. branching factor, m = max. path length
    - complexity: O(b^m), since it will repeatedly re-expand states
  • Example:
    - graph: a 4-connected grid of 40 by 40 cells; start: center of the grid
    - A* expands up to 800 states; DFS may expand way over 4^20 > 10^12 states
  • What if the goal is a few steps away in a huge state-space?

  36. Memory Issues
  • Alternatives:
  • IDA* (Iterative Deepening A*)
    1. set fmax = 1 (or some other small value)
    2. execute the (previously explained) DFS that does not expand states with f > fmax
    3. if DFS returns a path to the goal, return it
    4. otherwise set fmax = fmax + 1 (or a larger increment) and go to step 2

  37. Memory Issues
  • Alternatives:
  • IDA* (Iterative Deepening A*)
    1. set fmax = 1 (or some other small value)
    2. execute the (previously explained) DFS that does not expand states with f > fmax
    3. if DFS returns a path to the goal, return it
    4. otherwise set fmax = fmax + 1 (or a larger increment) and go to step 2
  • Complete and optimal in any state-space (with positive costs)
  • Memory: O(bl), where b = max. branching factor, l = length of the optimal path
  • Complexity: O(k·b^l), where k is the number of times DFS is called
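A sketch of IDA* following the steps above, except that the bound is raised to the smallest f-value that exceeded the old bound rather than incremented by 1 (a standard refinement that keeps k small); the grid example is illustrative:

```python
import math

FOUND = object()

def ida_star(succ, h, start, goal):
    """Repeated depth-first searches bounded by an f threshold. Memory is
    just the current path, O(bl); h must be admissible. Returns the path."""
    bound = h(start)
    path = [start]

    def dfs(s, g):
        f = g + h(s)
        if f > bound:
            return f                  # report the overflowing f-value
        if s == goal:
            return FOUND
        minimum = math.inf
        for s2, c in succ(s):
            if s2 in path:            # avoid looping on the current path
                continue
            path.append(s2)
            t = dfs(s2, g + c)
            if t is FOUND:
                return FOUND          # keep the path intact on success
            path.pop()
            minimum = min(minimum, t)
        return minimum

    while True:
        t = dfs(start, 0)
        if t is FOUND:
            return list(path)
        if t == math.inf:
            return None               # no path exists at any bound
        bound = t                     # smallest overflowing f, not bound+1

def grid_succ(s):
    x, y = s
    for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= n[0] < 5 and 0 <= n[1] < 5:
            yield n, 1

path = ida_star(grid_succ, lambda s: abs(s[0] - 4) + abs(s[1] - 4),
                (0, 0), (4, 4))
```

Note how the only stored search state is `path` itself, which is what buys the O(bl) memory at the price of re-expansions across iterations.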

  38. Memory Issues
  • Alternatives:
  • Local search (when time for planning is VERY limited)
    1. run A* for the T msecs given for planning
    2. if the goal was expanded, return the path to the goal
    3. otherwise return a path to the state s with minimum f(s) in OPEN

  39. Memory Issues
  • Alternatives:
  • Local search (when time for planning is VERY limited)
    1. run A* for the T msecs given for planning
    2. if the goal was expanded, return the path to the goal
    3. otherwise return a path to the state s with minimum f(s) in OPEN
  • useful when time is small and hard-limited and state-spaces are pretty large
  • incomplete
    - to make it complete in undirected graphs, need to update heuristics after each search (see Real-Time Adaptive A* [Koenig & Likhachev ’06])
    - incomplete in directed graphs

  40. Memory Issues
  • Alternatives:
  • Local search (when time for planning is VERY limited)
    1. run A* for the T msecs given for planning
    2. if the goal was expanded, return the path to the goal
    3. otherwise return a path to the state s with minimum f(s) in OPEN
  • useful when time is small and hard-limited and state-spaces are pretty large
  • incomplete
    - to make it complete in undirected graphs, need to update heuristics after each search (see Real-Time Adaptive A* [Koenig & Likhachev ’06])
    - incomplete in directed graphs (Why?)
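The time-bounded variant can be sketched by checking a deadline at each expansion and, on timeout, returning the path to the state just popped from OPEN, which is exactly the state with minimum f. The time budget and the grid example below are illustrative:

```python
import heapq
import math
import time

def time_bounded_astar(succ, h, start, goal, budget_s):
    """Run A* until the goal is expanded or the deadline passes; on
    timeout, return the partial path to the best frontier state."""
    deadline = time.monotonic() + budget_s
    g, parent = {start: 0}, {}
    OPEN, CLOSED = [(h(start), start)], set()

    def extract(s):
        path = [s]
        while s != start:
            s = parent[s]
            path.append(s)
        return path[::-1]

    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:
            continue                          # stale queue entry
        if s == goal:
            return extract(s), True           # complete path to the goal
        if time.monotonic() > deadline:
            return extract(s), False          # partial path, minimum f in OPEN
        CLOSED.add(s)
        for s2, c in succ(s):
            if s2 not in CLOSED and g[s] + c < g.get(s2, math.inf):
                g[s2] = g[s] + c
                parent[s2] = s
                heapq.heappush(OPEN, (g[s2] + h(s2), s2))
    return None

def grid_succ(s):
    x, y = s
    for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= n[0] < 6 and 0 <= n[1] < 6:
            yield n, 1

h = lambda s: abs(s[0] - 5) + abs(s[1] - 5)
full, done = time_bounded_astar(grid_succ, h, (0, 0), (5, 5), 5.0)
partial, part_done = time_bounded_astar(grid_succ, h, (0, 0), (5, 5), -1.0)
```

Executing the partial path and replanning from its end is the agent-centered loop the slide alludes to; without the heuristic updates of Real-Time Adaptive A*, that loop can cycle forever, which is the incompleteness being pointed out.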
