
CS 4100: Artificial Intelligence


Presentation Transcript


  1. CS 4100: Artificial Intelligence Informed Search Instructor: Jan-Willem van de Meent [Adapted from slides by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley (ai.berkeley.edu).]

  2. Announcements • Homework 1: Search (lead TA: Iris) • Due Mon 16 Sep at 11:59pm (deadline extended) • Project 1: Search (lead TA: Iris) • Due Mon 23 Sep at 11:59pm • Longer than most – start early! • Homework 2: Constraint Satisfaction Problems (lead TA: Eli) • Due Mon 23 Sep at 11:59pm • Office Hours • Iris: Mon 10:00am-noon, RI 237 • JW: Tue 1:40pm-2:40pm, DG 111 • Eli: Wed 3:00pm-5:00pm, RY 143 • Zhaoqing: Thu 9:00am-11:00am, HS 202

  3. Today • Informed Search • Heuristics • Greedy Search • A* Search • Graph Search

  4. Recap: Search

  5. Recap: Search • Search problem: • States (configurations of the world) • Actions and costs • Successor function (world dynamics) • Start state and goal test • Search tree: • Nodes represent plans for reaching states • Plans have costs (sum of action costs) • Search algorithm: • Systematically builds a search tree • Chooses an ordering of the fringe (unexplored nodes) • Complete: finds solution if it exists • Optimal: finds least-cost plan
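The components on this slide map directly onto a small programming interface. Below is a minimal sketch in Python (the class and method names, and the toy route-finding instance, are illustrative assumptions rather than the course's project code):

```python
class SearchProblem:
    """Abstract search problem: start state, goal test, successor function."""

    def get_start_state(self):
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError

    def get_successors(self, state):
        """Yield (next_state, action, step_cost) triples."""
        raise NotImplementedError


class RouteProblem(SearchProblem):
    """Tiny illustrative instance: find a route in a weighted graph."""

    def __init__(self, graph, start, goal):
        self.graph, self.start, self.goal = graph, start, goal

    def get_start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.goal

    def get_successors(self, state):
        for nxt, cost in self.graph.get(state, []):
            yield nxt, "go to " + nxt, cost
```

Every algorithm in this lecture only needs these three methods; the algorithms differ solely in how they order the fringe.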

  6. Example: Pancake Problem Cost: Number of pancakes flipped

  7. Example: Pancake Problem

  8. Example: Pancake Problem

  9. Example: Pancake Problem State space graph with costs as weights [figure: state space graph with each edge labeled by its flip cost, costs between 2 and 4]

  10. General Tree Search [figure: partial search tree for the pancake problem] Path to reach goal: flip four, flip three • Total cost: 7 • Action: flip top two, cost: 2 • Action: flip all four, cost: 4

  11. The One Queue • All these search algorithms are the same except for fringe strategies • Conceptually, all fringes are priority queues (i.e. collections of nodes with attached priorities) • Practically, for DFS and BFS, you can avoid the log(n) overhead from an actual priority queue, by using stacks and queues • Can even code one implementation that takes a variable queuing object
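A hedged sketch of that last point (the fringe classes and the problem interface from the earlier sketch are assumptions, not the project code): the same tree-search loop becomes DFS with a stack, BFS with a queue, and UCS with a priority queue keyed on path cost.

```python
from collections import deque
import heapq


def generic_search(problem, fringe):
    """Tree search (no explored set); behavior is fixed entirely by the fringe."""
    fringe.push((problem.get_start_state(), [], 0), priority=0)
    while not fringe.is_empty():
        state, plan, cost = fringe.pop()
        if problem.is_goal(state):
            return plan
        for nxt, action, step in problem.get_successors(state):
            fringe.push((nxt, plan + [action], cost + step), priority=cost + step)
    return None


class Stack:                                   # LIFO fringe -> DFS
    def __init__(self): self.items = []
    def push(self, item, priority): self.items.append(item)
    def pop(self): return self.items.pop()
    def is_empty(self): return not self.items


class Queue:                                   # FIFO fringe -> BFS
    def __init__(self): self.items = deque()
    def push(self, item, priority): self.items.append(item)
    def pop(self): return self.items.popleft()
    def is_empty(self): return not self.items


class PriorityQueue:                           # cheapest-first fringe -> UCS
    def __init__(self): self.heap, self.count = [], 0
    def push(self, item, priority):
        heapq.heappush(self.heap, (priority, self.count, item)); self.count += 1
    def pop(self): return heapq.heappop(self.heap)[2]
    def is_empty(self): return not self.heap


# generic_search(problem, Stack())          -> depth-first search
# generic_search(problem, Queue())          -> breadth-first search
# generic_search(problem, PriorityQueue())  -> uniform cost search
```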

  12. Uninformed Search

  13. Uniform Cost Search • Strategy: expand lowest path cost • The good: UCS is complete and optimal! • The bad: • Explores options in every “direction” • No information about goal location [figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 spreading from Start; Goal lies off to one side]
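As a concrete instance, here is a self-contained sketch of UCS over a small weighted adjacency list (the graph and names are made up for illustration, and a simple explored set is added so the example terminates). Expansion order depends only on the backward cost g(n), and the goal test happens when a node is dequeued:

```python
import heapq


def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of cheapest path cost g(n); return (cost, path)."""
    frontier = [(0, start, [start])]              # (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                         # goal test on dequeue
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in graph.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (g + step_cost, nxt, path + [nxt]))
    return None


# Adjacency list: node -> [(neighbor, step cost), ...]
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))       # (4, ['S', 'A', 'B', 'G'])
```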

  14. UCS in Empty Space

  15. UCS Contours for a Small Maze

  16. Informed Search

  17. Search Heuristics • A heuristic is: • A function that estimates how close a state is to a goal • Designed for a particular search problem • Examples: Manhattan distance, Euclidean distance [figure: Pacman example with distances 10, 5, and 11.2]
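Both example heuristics can be written in a couple of lines. A sketch, assuming states and goals are (x, y) grid coordinates (the representation and function names are assumptions):

```python
import math


def manhattan_heuristic(state, goal):
    """Horizontal plus vertical offset; ignores walls, so it never overestimates."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)


def euclidean_heuristic(state, goal):
    """Straight-line distance; always less than or equal to the Manhattan distance."""
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)
```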

  18. Example: Heuristic for Travel in Romania [figure: map of Romania with a table of heuristic values h(x) for each city]

  19. Example: Heuristic for Pancake Flipping Heuristic: the number of pancakes that are still out of place [figure: state space graph annotated with h(x) values]

  20. Example: Heuristic for Pancake Flipping New heuristic: the index of the largest pancake that is still out of place [figure: state space graph annotated with h1(x) and h2(x) values]
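Both pancake heuristics follow directly from their descriptions. A sketch, assuming a state is a tuple of pancake sizes listed from top of the stack to bottom, and pancakes are numbered 1..n by size so that a pancake's "index" is just its size (these representation choices are assumptions):

```python
def h_misplaced(state):
    """h1: number of pancakes that are not in their goal position."""
    goal = tuple(sorted(state))                   # smallest on top, largest on bottom
    return sum(1 for have, want in zip(state, goal) if have != want)


def h_largest_misplaced(state):
    """h2: index (= size) of the largest pancake that is still out of place."""
    goal = tuple(sorted(state))
    return max((have for have, want in zip(state, goal) if have != want), default=0)


print(h_misplaced((3, 1, 2, 4)), h_largest_misplaced((3, 1, 2, 4)))   # 3 3
```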

  21. Greedy Search

  22. Strategy: pick the node with the smallest h(x)

  23. Greedy Search • Expand the node that seems closest • What can go wrong?

  24. Greedy Search • Strategy: expand the node that you think is closest to a goal state • Heuristic: estimate of distance to nearest goal for each state • A common case: best-first takes you straight to the goal (but finds a suboptimal path) • Worst case: like a badly-guided DFS
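Greedy best-first search is a one-line change from the UCS sketch above: the fringe priority is the heuristic h(n) alone instead of the path cost (names again illustrative):

```python
import heapq


def greedy_search(graph, start, goal, h):
    """Always expand the node that h says is closest to the goal."""
    frontier = [(h(start), start, [start])]       # (h, state, path)
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                           # may be suboptimal
        if state in explored:
            continue
        explored.add(state)
        for nxt, _step in graph.get(state, []):
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```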

  25. Greedy Search in Empty Space

  26. Greedy Search in a Small Maze

  27. A* Search

  28. A* Search • UCS (slow and steady) • Greedy Search (fast but unreliable) • A* Search (best of both worlds)

  29. Combining UCS and Greedy Search • Uniform-cost orders by path cost, or backward cost g(n) • Greedy orders by goal proximity, or forward heuristic h(n) • A* Search orders by the sum: f(n) = g(n) + h(n) [figure: example graph annotated with g, h, and f values at each node; example by Teg Grenager]

  30. When should A* terminate? • Should we stop when we enqueue a goal? • No: only stop when we dequeue a goal [figure: S→A→G with costs 2 + 2, S→B→G with costs 2 + 3; h(S) = 3, h(A) = 2, h(B) = 1, h(G) = 0]
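Putting the two slides together: A* orders the fringe by f(n) = g(n) + h(n) and only declares success when a goal is dequeued. A sketch, using a reconstruction of the graph from the slide above (the edge costs and h-values are read off the figure, so treat them as assumptions):

```python
import heapq


def a_star_search(graph, start, goal, h):
    """Order the fringe by f(n) = g(n) + h(n); stop only when a goal is dequeued."""
    frontier = [(h(start), 0, start, [start])]    # (f, g, state, path)
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:                         # dequeue test, not enqueue test
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, step_cost in graph.get(state, []):
            if nxt not in explored:
                g2 = g + step_cost
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None


# The path through B reaches a goal on the fringe first, but the cheaper
# S-A-G path wins because the goal is only tested when it is dequeued.
graph = {"S": [("A", 2), ("B", 2)], "A": [("G", 2)], "B": [("G", 3)]}
h_values = {"S": 3, "A": 2, "B": 1, "G": 0}
print(a_star_search(graph, "S", "G", h_values.get))   # (4, ['S', 'A', 'G'])
```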

  31. Is A* Optimal? • What went wrong? • Actual cost < heuristic cost • We need estimates to be no larger than actual costs! [figure: S→A→G costs 1 + 3 = 4, direct S→G costs 5; h(A) = 6 overestimates, so A* returns the more expensive direct path]

  32. Admissible Heuristics

  33. Idea: Admissibility • Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe • Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs

  34. Admissible Heuristics • A heuristic h(n) is admissible (optimistic) if: 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal • Examples: [figure: two example heuristic values, 15 and 4] • Coming up with admissible heuristics is most of what’s involved in using A* in practice.
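On a problem small enough to solve exactly, admissibility can be sanity-checked by comparing h against the true cost-to-go h*(n). A sketch (helper names are illustrative; h*(n) is computed with Dijkstra's algorithm on the reversed graph):

```python
import heapq


def true_costs_to_goal(graph, goal):
    """h*(n): exact cheapest cost from each node to the goal (Dijkstra, reversed edges)."""
    reverse = {}
    for u, edges in graph.items():
        for v, cost in edges:
            reverse.setdefault(v, []).append((u, cost))
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in reverse.get(u, []):
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(frontier, (dist[v], v))
    return dist


def is_admissible(h, graph, goal):
    """True if h(n) <= h*(n) for every node that can reach the goal."""
    h_star = true_costs_to_goal(graph, goal)
    return all(h(n) <= cost for n, cost in h_star.items())
```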

  35. Optimality of A* Tree Search

  36. Optimality of A* Tree Search Assume: • A is an optimal goal node • B is a suboptimal goal node • h is admissible Claim: • A will exit the fringe before B

  37. Optimality of A* Tree Search: Blocking Proof: • Imagine B is on the fringe • Some ancestor n of A is on the fringe, too (maybe A itself!) • Claim: n will be expanded before B • f(n) is less than or equal to f(A): f(n) = g(n) + h(n) (definition of f-cost), g(n) + h(n) ≤ g(A) (admissibility of h), and g(A) = g(A) + h(A) = f(A) (h = 0 at a goal)

  38. Optimality of A* Tree Search: Blocking Proof: • Imagine B is on the fringe • Some ancestor n of A is on the fringe, too (maybe A itself!) • Claim: n will be expanded before B • f(n) is less than or equal to f(A) • f(A) is less than f(B): g(A) < g(B) (B is suboptimal), and f(A) = g(A), f(B) = g(B) (h = 0 at a goal)

  39. Optimality of A* Tree Search: Blocking Proof: • Imagine B is on the fringe • Some ancestor n of A is on the fringe, too (maybe A itself!) • Claim: n will be expanded before B • f(n) is less than or equal to f(A) • f(A) is less than f(B) • n expands before B • All ancestors of A expand before B • A expands before B • A* search is optimal

  40. Properties of A*

  41. Properties of A* [figure: Uniform-Cost vs. A* search trees with branching factor b]

  42. UCS vs A* Contours • Uniform-cost expands equally in all “directions” • A* expands mainly toward the goal, but does hedge its bets to ensure optimality [figure: UCS contours centered on Start vs. A* contours stretched toward Goal]

  43. A* Search in Empty Space

  44. A* Search in a Small Maze

  45. Comparison: Greedy vs. Uniform Cost vs. A*

  46. A* Applications • Video games • Pathing / routing problems • Resource planning problems • Robot motion planning • Language analysis • Machine translation • Speech recognition • …

  47. Pacman (Tiny Maze) – UCS / A*

  48. Quiz: Shallow/Deep Water – Guess the Algorithm

  49. Creating Heuristics

  50. Creating Admissible Heuristics • Most of the work in solving hard search problems optimally is in coming up with admissible heuristics • Often, admissible heuristics are solutions to relaxed problems, where new actions are available • Inadmissible heuristics are often useful too
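For instance, relaxing a sliding-tile puzzle so that a tile may jump straight to its goal square, or slide onto an occupied square, yields the misplaced-tiles and Manhattan-distance heuristics, both admissible. A sketch, assuming a state is a 3x3 tuple of tuples with 0 marking the blank (the representation and names are assumptions, not part of the course materials):

```python
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
GOAL_POS = {tile: (r, c) for r, row in enumerate(GOAL) for c, tile in enumerate(row)}


def h_misplaced_tiles(state):
    """Relaxation: a tile may jump straight to its goal square in one move."""
    return sum(1 for r, row in enumerate(state) for c, tile in enumerate(row)
               if tile != 0 and GOAL[r][c] != tile)


def h_manhattan_tiles(state):
    """Relaxation: a tile may slide onto any adjacent square, occupied or not."""
    total = 0
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile != 0:
                gr, gc = GOAL_POS[tile]
                total += abs(r - gr) + abs(c - gc)
    return total
```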
