
CS 188: Artificial Intelligence




Presentation Transcript


  1. CS 188: Artificial Intelligence Search Continued Instructors: Anca Dragan University of California, Berkeley [These slides adapted from Dan Klein and Pieter Abbeel; ai.berkeley.edu]

  2. Recap: Search

  3. Search • Search problem: • States (abstraction of the world) • Actions (and costs) • Successor function (world dynamics): {s' | s, a → s'} • Start state and goal test

  4. Depth-First Search

  5. Depth-First Search • Strategy: expand a deepest node first • Implementation: fringe is a LIFO stack [Figure: state-space graph and the corresponding search tree]

  6. Search Algorithm Properties

  7. Search Algorithm Properties • Complete: Guaranteed to find a solution if one exists? Return in finite time if not? • Optimal: Guaranteed to find the least-cost path? • Time complexity? • Space complexity? • Cartoon of search tree: b is the branching factor, m is the maximum depth, solutions at various depths • Number of nodes in the entire tree? 1 + b + b^2 + … + b^m = O(b^m) [Figure: search tree with 1 node, b nodes, b^2 nodes, …, b^m nodes across m tiers]

  8. Depth-First Search (DFS) Properties • What nodes does DFS expand? Some left prefix of the tree; could process the whole tree! If m is finite, takes time O(b^m) • How much space does the fringe take? Only has siblings on the path to the root, so O(bm) • Is it complete? m could be infinite, so only if we prevent cycles (more later) • Is it optimal? No, it finds the “leftmost” solution, regardless of depth or cost
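The LIFO-stack fringe described above can be sketched in a few lines of Python. The toy graph, successor function, and goal test below are illustrative stand-ins, not the course's Pacman code.

```python
def depth_first_search(start, successors, is_goal):
    """Tree-style DFS: the fringe is a LIFO stack of partial paths."""
    fringe = [[start]]                       # stack of paths
    while fringe:
        path = fringe.pop()                  # LIFO: deepest node first
        state = path[-1]
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in path:            # avoid cycles along this path
                fringe.append(path + [child])
    return None

# Hypothetical toy graph for illustration
graph = {'S': ['a', 'b'], 'a': ['G'], 'b': ['c'], 'c': ['G'], 'G': []}
path = depth_first_search('S', lambda s: graph[s], lambda s: s == 'G')
```

On this graph DFS returns the deeper path S → b → c → G even though the shallower S → a → G exists, matching the point above that DFS is not optimal.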

  9. Breadth-First Search

  10. Breadth-First Search • Strategy: expand a shallowest node first • Implementation: fringe is a FIFO queue [Figure: state-space graph and search tree expanded in search tiers]

  11. Breadth-First Search (BFS) Properties • What nodes does BFS expand? Processes all nodes above the shallowest solution; let the depth of the shallowest solution be s; search takes time O(b^s) • How much space does the fringe take? Has roughly the last tier, so O(b^s) • Is it complete? s must be finite if a solution exists, so yes! (if no solution, we still need depth ≠ ∞) • Is it optimal? Only if costs are all 1 (more on costs later)
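The FIFO-queue fringe can be sketched the same way; only the pop order changes from the DFS version. The toy graph is again an illustrative stand-in.

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """BFS: the fringe is a FIFO queue, so the shallowest node expands first."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()              # FIFO: shallowest node first
        state = path[-1]
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in path:
                fringe.append(path + [child])
    return None

# Hypothetical toy graph for illustration
graph = {'S': ['a', 'b'], 'a': ['G'], 'b': ['c'], 'c': ['G'], 'G': []}
path = breadth_first_search('S', lambda s: graph[s], lambda s: s == 'G')
```

On the same graph where DFS dives into the deeper branch, BFS returns the two-action path S → a → G: the shallowest solution.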

  12. Video of Demo Maze Water DFS/BFS (part 1)

  13. Video of Demo Maze Water DFS/BFS (part 2)

  14. Iterative Deepening • Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages • Run a DFS with depth limit 1. If no solution… • Run a DFS with depth limit 2. If no solution… • Run a DFS with depth limit 3. … • Isn’t that wastefully redundant? • Generally most work happens in the lowest level searched, so not so bad!
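The loop of depth-limited DFS runs described above might be sketched as follows; the toy graph is illustrative and the `max_depth` cap is an assumption to guarantee termination when no solution exists.

```python
def depth_limited_dfs(start, successors, is_goal, limit):
    """DFS that never expands a node deeper than `limit` edges from the start."""
    fringe = [[start]]
    while fringe:
        path = fringe.pop()
        state = path[-1]
        if is_goal(state):
            return path
        if len(path) - 1 < limit:            # depth = number of edges so far
            for child in successors(state):
                fringe.append(path + [child])
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Rerun DFS with growing limits: DFS-like space, BFS-like shallow solutions."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(start, successors, is_goal, limit)
        if result is not None:
            return result
    return None

# Hypothetical toy graph for illustration
graph = {'S': ['a', 'b'], 'a': ['G'], 'b': ['c'], 'c': ['G'], 'G': []}
path = iterative_deepening('S', lambda s: graph[s], lambda s: s == 'G')
```

Because the limit grows one tier at a time, the first solution found is a shallowest one, here S → a → G.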

  15. Cost-Sensitive Search [Figure: state-space graph from START to GOAL]

  16. Cost-Sensitive Search • BFS finds the shortest path in terms of the number of actions. It does not find the least-cost path. • We will now cover a similar algorithm that does find the least-cost path. How? [Figure: state-space graph from START to GOAL with a cost on each arc]

  17. Uniform Cost Search

  18. Uniform Cost Search • Strategy: expand a cheapest node first • Implementation: fringe is a priority queue (priority: cumulative cost) [Figure: state-space graph with arc costs, search tree annotated with cumulative costs, and cost contours]

  19. Uniform Cost Search (UCS) Properties • What nodes does UCS expand? Processes all nodes with cost less than the cheapest solution! If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε • Takes time O(b^(C*/ε)) (exponential in effective depth) • How much space does the fringe take? Has roughly the last tier, so O(b^(C*/ε)) • Is it complete? Assuming the best solution has a finite cost and the minimum arc cost is positive, yes! (if no solution, we still need depth ≠ ∞) • Is it optimal? Yes! (proof via A*) [Figure: search tree with cost contours c ≤ 1, c ≤ 2, c ≤ 3; roughly C*/ε “tiers”]
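The cheapest-node-first strategy can be sketched with Python's `heapq` as the priority queue; the weighted toy graph and the `best_g` bookkeeping are illustrative assumptions, not the course's exact implementation.

```python
import heapq

def uniform_cost_search(start, successors, is_goal):
    """UCS: the fringe is a priority queue keyed on cumulative path cost g."""
    fringe = [(0, [start])]                  # entries are (g, path)
    best_g = {}                              # cheapest g seen per state
    while fringe:
        g, path = heapq.heappop(fringe)      # cheapest node first
        state = path[-1]
        if is_goal(state):                   # goal test on dequeue
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                         # already expanded more cheaply
        best_g[state] = g
        for child, step_cost in successors(state):
            heapq.heappush(fringe, (g + step_cost, path + [child]))
    return None

# Hypothetical weighted graph: successors(s) yields (child, cost) pairs
graph = {'S': [('a', 1), ('b', 4)], 'a': [('b', 1), ('G', 9)],
         'b': [('G', 3)], 'G': []}
cost, path = uniform_cost_search('S', lambda s: graph[s], lambda s: s == 'G')
```

Here UCS returns the cost-5 path S → a → b → G, even though S → b → G uses fewer actions (cost 7): least cost, not fewest steps.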

  20. Uniform Cost Issues • Remember: UCS explores increasing cost contours • The good: UCS is complete and optimal! • The bad: explores options in every “direction”; no information about goal location • We’ll fix that soon! [Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 expanding from Start toward Goal] [Demo: empty grid UCS (L2D5)] [Demo: maze with deep/shallow water DFS/BFS/UCS (L2D7)]

  21. Video of Demo Empty UCS

  22. Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 1)

  23. Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 2)

  24. Video of Demo Maze with Deep/Shallow Water --- DFS, BFS, or UCS? (part 3)

  25. The One Queue • All these search algorithms are the same except for fringe strategies • Conceptually, all fringes are priority queues (i.e. collections of nodes with attached priorities) • Practically, for DFS and BFS, you can avoid the log(n) overhead from an actual priority queue, by using stacks and queues • Can even code one implementation that takes a variable queuing object
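The "one implementation that takes a variable queuing object" idea might be sketched like this; the fringe classes, their `push`/`pop` interface, and the toy graph are illustrative assumptions, not the course codebase.

```python
import heapq
from collections import deque

class Stack:                                 # LIFO fringe -> DFS
    def __init__(self): self.items = []
    def push(self, item, priority): self.items.append(item)
    def pop(self): return self.items.pop()
    def __bool__(self): return bool(self.items)

class Queue:                                 # FIFO fringe -> BFS
    def __init__(self): self.items = deque()
    def push(self, item, priority): self.items.append(item)
    def pop(self): return self.items.popleft()
    def __bool__(self): return bool(self.items)

class PriorityQueue:                         # cost-ordered fringe -> UCS
    def __init__(self): self.heap = []; self.count = 0
    def push(self, item, priority):
        heapq.heappush(self.heap, (priority, self.count, item))
        self.count += 1                      # tie-breaker avoids comparing items
    def pop(self): return heapq.heappop(self.heap)[2]
    def __bool__(self): return bool(self.heap)

def generic_search(start, successors, is_goal, fringe):
    """One search loop; only the fringe object changes the strategy."""
    fringe.push((start, [start], 0), 0)
    while fringe:
        state, path, g = fringe.pop()
        if is_goal(state):
            return path
        for child, cost in successors(state):
            if child not in path:
                fringe.push((child, path + [child], g + cost), g + cost)
    return None

# Hypothetical weighted graph shared by all three strategies
graph = {'S': [('a', 1), ('b', 4)], 'a': [('b', 1), ('G', 9)],
         'b': [('G', 3)], 'G': []}
succ, goal = (lambda s: graph[s]), (lambda s: s == 'G')
```

Passing `Stack()`, `Queue()`, or `PriorityQueue()` into the same loop reproduces DFS, BFS, or UCS behavior respectively, which is the point of the slide: the algorithms differ only in their fringe.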

  26. Up next: Informed Search • Uninformed Search • DFS • BFS • UCS • Informed Search • Heuristics • Greedy Search • A* Search • Graph Search

  27. Search Heuristics • A heuristic is: a function that estimates how close a state is to a goal; designed for a particular search problem • Examples for pathing: Manhattan distance, Euclidean distance [Figure: Pacman maze with example heuristic values 10, 5, and 11.2]
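The two pathing heuristics named above are one-liners for (x, y) grid positions; this is a generic sketch, not the course's Pacman heuristic code.

```python
import math

def manhattan(pos, goal):
    """Manhattan distance: grid moves in the four cardinal directions."""
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def euclidean(pos, goal):
    """Euclidean distance: straight-line distance, ignoring walls."""
    return math.hypot(pos[0] - goal[0], pos[1] - goal[1])

# e.g. from (0, 0) to (3, 4): manhattan gives 7, euclidean gives 5.0
```

Both estimate how close a state is to the goal; Manhattan matches grid movement more tightly, while Euclidean never exceeds it.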

  28. Example: Heuristic Function h(x)

  29. Greedy Search

  30. Greedy Search • Expand the node that seems closest… • Is it optimal? • No. Resulting path to Bucharest is not the shortest!

  31. Greedy Search • Strategy: expand a node that you think is closest to a goal state • Heuristic: estimate of distance to nearest goal for each state • A common case: best-first takes you straight to the (wrong) goal • Worst case: like a badly-guided DFS [Demo: contours greedy empty (L3D1)] [Demo: contours greedy pacman small maze (L3D4)]
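Greedy best-first search orders the fringe by h(n) alone, ignoring path cost. In this sketch the graph and heuristic values are made up precisely to show the "straight to the (wrong) goal" failure mode described above.

```python
import heapq

def greedy_search(start, successors, h, is_goal):
    """Greedy best-first: fringe ordered by estimated distance h(n) only."""
    fringe = [(h(start), [start])]
    while fringe:
        _, path = heapq.heappop(fringe)      # node that *seems* closest
        state = path[-1]
        if is_goal(state):
            return path
        for child, _cost in successors(state):   # step costs are ignored!
            if child not in path:
                heapq.heappush(fringe, (h(child), path + [child]))
    return None

# Hypothetical graph where the heuristic lures greedy onto the costlier route
graph = {'S': [('a', 2), ('b', 2)], 'a': [('G', 4)], 'b': [('G', 1)], 'G': []}
h = {'S': 3, 'a': 1, 'b': 2, 'G': 0}
path = greedy_search('S', lambda s: graph[s], lambda s: h[s], lambda s: s == 'G')
# Greedy takes S -> a -> G (cost 6); the least-cost path S -> b -> G costs only 3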

  32. Video of Demo Contours Greedy (Empty)

  33. Video of Demo Contours Greedy (Pacman Small Maze)

  34. A* Search

  35. A* Search UCS Greedy A*

  36. Combining UCS and Greedy • Uniform-cost orders by path cost, or backward cost g(n) • Greedy orders by goal proximity, or forward cost h(n) • A* Search orders by the sum: f(n) = g(n) + h(n) [Figure: example graph annotated with g and h values at each node; example from Teg Grenager]
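The f(n) = g(n) + h(n) ordering can be sketched by extending the UCS loop with a heuristic term. The example graph below reproduces the small termination example on slide 37 (edges S→A and A→G of cost 2, S→B of cost 2, B→G of cost 3, with h(S) = 3, h(A) = 2, h(B) = 1, h(G) = 0).

```python
import heapq

def a_star(start, successors, h, is_goal):
    """A*: fringe priority is f(n) = g(n) + h(n); goal test on DEQUEUE."""
    fringe = [(h(start), 0, [start])]        # entries are (f, g, path)
    best_g = {}
    while fringe:
        f, g, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):                   # only stop when we dequeue a goal
            return g, path
        if state in best_g and best_g[state] <= g:
            continue
        best_g[state] = g
        for child, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + h(child), g2, path + [child]))
    return None

edges = {'S': [('A', 2), ('B', 2)], 'A': [('G', 2)], 'B': [('G', 3)], 'G': []}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
cost, path = a_star('S', lambda s: edges[s], lambda s: h[s], lambda s: s == 'G')
# S -> B -> G (f = 5) is enqueued before S -> A -> G (f = 4), but the
# dequeue-time goal test lets the cheaper path win
```

This is exactly why A* stops only on dequeue: the goal path enqueued first is not necessarily the cheapest one in the fringe.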

  37. When should A* terminate? • Should we stop when we enqueue a goal? • No: only stop when we dequeue a goal [Figure: graph with edges S→A (cost 2), A→G (cost 2), S→B (cost 2), B→G (cost 3); h(S) = 3, h(A) = 2, h(B) = 1, h(G) = 0] Fringe entries (g, h, g + h): S: (0, 3, 3); S→A: (2, 2, 4); S→B: (2, 1, 3); S→B→G: (5, 0, 5); S→A→G: (4, 0, 4)

  38. Is A* Optimal? • What went wrong? • Actual bad goal cost < estimated good goal cost • We need estimates to be less than actual costs! [Figure: graph with edges S→A (cost 1), A→G (cost 3), S→G (cost 5); h(S) = 7, h(A) = 6, h(G) = 0] Fringe entries (g, h, g + h): S: (0, 7, 7); S→A: (1, 6, 7); S→G: (5, 0, 5)

  39. Admissible Heuristics

  40. Idea: Admissibility • Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe • Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs

  41. Admissible Heuristics • A heuristic h is admissible (optimistic) if: 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal • Examples: [Figure: example heuristic values 0.0, 11.5, 15] • Coming up with admissible heuristics is most of what’s involved in using A* in practice.

  42. Optimality of A* Tree Search

  43. Optimality of A* Tree Search Assume: • A is an optimal goal node • B is a suboptimal goal node • h is admissible Claim: • A will exit the fringe before B …

  44. Optimality of A* Tree Search: Blocking Proof: • Imagine B is on the fringe • Some ancestor n of A is on the fringe, too (maybe A!) • Claim: n will be expanded before B • f(n) is less than or equal to f(A): f(n) = g(n) + h(n) (definition of f-cost); g(n) + h(n) ≤ g(A) (admissibility of h: h underestimates the remaining cost along the optimal path through n); g(A) = f(A) (h = 0 at a goal)

  45. Optimality of A* Tree Search: Blocking Proof: • Imagine B is on the fringe • Some ancestor n of A is on the fringe, too (maybe A!) • Claim: n will be expanded before B • f(n) is less than or equal to f(A) • f(A) is less than f(B): f(A) = g(A) < g(B) = f(B), since B is suboptimal and h = 0 at a goal

  46. Optimality of A* Tree Search: Blocking Proof: • Imagine B is on the fringe • Some ancestor n of A is on the fringe, too (maybe A!) • Claim: n will be expanded before B • f(n) is less or equal to f(A) • f(A) is less than f(B) • n expands before B • All ancestors of A expand before B • A expands before B • A* search is optimal …

  47. Properties of A* [Figure: side-by-side search trees for Uniform-Cost and A*]

  48. UCS vs A* Contours • Uniform-cost expands equally in all “directions” • A* expands mainly toward the goal, but does hedge its bets to ensure optimality [Figure: UCS contours centered on Start; A* contours stretched from Start toward Goal] [Demo: contours UCS / greedy / A* empty (L3D1)] [Demo: contours A* pacman small maze (L3D5)]

  49. Video of Demo Contours (Empty) -- UCS

  50. Video of Demo Contours (Empty) -- Greedy
