CS 2710, ISSP 2160: Chapter 3, Part 2, Heuristic Search

Presentation Transcript


  1. CS 2710, ISSP 2160 Chapter 3, Part 2 Heuristic Search

  2. Heuristic Search • Take advantage of information about the problem

  3. Best-First-Search • An evaluation function f determines order of nodes on the fringe (there are variations, depending on the search algorithm)

  4. Best-First-Search • In our framework: • treesearch or graphsearch, with nodes ordered on the fringe in increasing order by an evaluation function, f(n).

  5. Best-first treesearch (code):

    def treesearch(qfun, fringe):
        while len(fringe) > 0:
            cur = fringe[0]
            fringe = fringe[1:]
            if goalp(cur):
                return cur
            fringe = qfun(makeNodes(cur, successors(cur)), fringe)
        return []

Best-first search: qfun appends the lists together and sorts them in increasing order by f-value. [In the more efficient version, a heap is used to maintain the queue in increasing order by f-value.]
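A minimal sketch of that queueing function, assuming an evaluation function f(n) is defined elsewhere (f = h gives greedy best-first search, f = g + h gives A*; the name bestFirstQfun is hypothetical):

    def bestFirstQfun(newNodes, fringe):
        # Append the new nodes and re-sort the whole fringe in increasing
        # order of f-value (the heap version avoids this repeated sort).
        return sorted(newNodes + fringe, key=f)

    # usage: result = treesearch(bestFirstQfun, [Node(start)])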

  6. Heuristic Evaluation Function, h(n) • f may involve a "heuristic evaluation function", h(n)

  7. h(n) • A metric on states: an estimate of the shortest distance to some goal • h : state → estimate of distance to goal • h(goal) = 0 for all goal nodes

  8. Greedy Best-First Search • f(n) = h(n) • Greedy best-first search may switch its strategy mid-search: for example, it may go depth-first for a while, but then return to the shallow parts of the tree.

  9. Greedy Example • In the map domain, h(n) could be the straight line distance from a city to Bucharest • Greedy search expands the node that currently appears to be closest to the goal
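A tiny sketch of that heuristic as a table lookup, using straight-line distances from the map figure (the sld table, shown partially, and the node interface are assumptions):

    # Straight-line distances to Bucharest (partial).
    sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
           'Timisoara': 329, 'Zerind': 374, 'Bucharest': 0}

    def h(node):
        return sld[node.state]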

  10. Go from Arad to Bucharest [Figure: road map of Romania; edges are labeled with step costs and each city with its straight-line distance to Bucharest, e.g. Arad 366, Sibiu 253, Fagaras 176, Rimnicu Vilcea 193, Bucharest 0]

  11. Greedy Example • Expand Arad (h = 366): children Sibiu 253, Timisoara 329, Zerind 374 • Expand Sibiu: children Arad 366, Fagaras 176, Oradea 380, Rimnicu Vilcea 193 • Expand Fagaras: children Sibiu 253, Bucharest 0 • Goal reached: Arad → Sibiu → Fagaras → Bucharest

  12. Greedy Search • Complete? • Nope • Optimal? • Nope • Time and Space? • It depends

  13. Best of Both • In an A* search we combine the best parts of uniform-cost search and greedy best-first search. • [Simple example in lecture] • We want to use the cost so far to gain optimality and completeness, while at the same time using a heuristic to draw us toward a goal.

  14. A*: f(n) = g(n) + h(n) • g(n): actual cost from the start to n • h(n): estimated distance from n.state to a goal • Even if h continuously returns good values for states along a path, if no goal is reached, g will eventually dominate h and force backtracking to shallower nodes.
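A minimal sketch of (treesearch) A* with a heap-ordered fringe. The interfaces are assumptions, not the course framework: h(state) is the heuristic, successors(state) yields (step_cost, next_state) pairs, and goalp(state) tests for a goal.

    import heapq
    from itertools import count

    def astar(start, h, successors, goalp):
        tie = count()  # tie-breaker so the heap never compares states
        fringe = [(h(start), next(tie), 0, start, [start])]  # (f, tie, g, state, path)
        while fringe:
            f, _, g, state, path = heapq.heappop(fringe)
            if goalp(state):  # goal test at expansion, not generation (see slide 22)
                return path
            for cost, succ in successors(state):
                g2 = g + cost
                heapq.heappush(fringe,
                               (g2 + h(succ), next(tie), g2, succ, path + [succ]))
        return []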

  15. Example • Figure 3.24 shows the progress of A* on the Romanian route finding problem. The h-values it uses are in Figure 3.22 • The next slide shows the state space with both the edge costs and the h-values in one diagram; this will help you trace through Figure 3.24

  16. Go from Arad to Bucharest [Figure: the same Romania map as slide 10, with edge costs and h-values in one diagram, repeated here for tracing Figure 3.24]

  17. A*: f(n) = g(n) + h(n) • If h(n) never overestimates the real cost of reaching a goal from n, then the treesearch version of A* is optimal. • An h function that does not overestimate is called admissible

  18. A* with an admissible heuristic is optimal • Let g2 be a suboptimal goal on the fringe and gO be an optimal goal, with g(gO) = C* • C* < g(g2) (since g2 is suboptimal) • h(g2) = 0 (since g2 is a goal) • So f(g2) = g(g2), and • C* < f(g2)

  19. Proof continued • Let n be a node on the fringe that is on an optimal solution path • Since h is admissible: f(n) = g(n) + h(n) <= C* • For g2 to be the first goal found, it would need to come first on the fringe • But f(n) <= C* < f(g2), so n is expanded before g2, and the optimal path is followed to a goal before g2 can be returned

  20. A* is complete • Even if the heuristic is not admissible • (As long as every edge cost exceeds some fixed ε > 0 and the branching factor, b, is finite. The wrap-up notes cover these details.)

  21. A* and Memory • Does A* solve the memory problems with BrFS and Uniform Cost? • A* has same or smaller memory requirement than BrFS or Uniform Cost • How is A* related to BrFS And UC? • BrFS = A* with edgecost(n1,n2) = c, h(n) = 0 (for some positive c) • UC = A* with h(n) = 0 • But it might not be sufficiently better to make A* practically feasible
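Using the astar sketch above, these relationships are one-liners (same assumed interfaces):

    # Uniform-cost search is A* with a zero heuristic; breadth-first search
    # additionally needs every edge cost to equal the same constant c.
    def ucs(start, successors, goalp):
        return astar(start, lambda s: 0, successors, goalp)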

  22. Note • Placement of goalp test (and return if successful) in algorithm is critical. • Optimality guarantee lost if nodes are tested when they are generated [elaboration and example] • True for uniform-cost search too

  23. Note for A* • Assuming f-costs are nondecreasing along any path: • Can draw contours in the state space • Inside a contour labeled 300 are all nodes with f(n) less than or equal to 300 • A* fans out from start, expanding nodes in bands of increasing f-cost. • h(n) = 0 and edgecosts equal: contours are round • With better heuristics, the bands narrow and stretch toward the goal node

  24. EG Admissible Heuristics • The 8-puzzle (a small version of the 15-puzzle) • Sample heuristics: • H1: number of misplaced tiles • H2: Manhattan distance

  25. 8 Puzzle Example [Figure: a sample 8-puzzle state S] • H1(S) = 7 • H2(S) = 2+3+3+2+4+2+0+2 = 18 • Which heuristic is better?
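A short sketch of both heuristics, assuming a state is a 9-tuple listing the tiles row by row with 0 for the blank (a hypothetical encoding; the slides don't fix one):

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def h1(state):
        # Number of misplaced tiles (the blank does not count).
        return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

    def h2(state):
        # Sum of Manhattan distances of the tiles from their goal cells.
        total = 0
        for i, t in enumerate(state):
            if t == 0:
                continue
            g = GOAL.index(t)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
        return total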

  26. Informedness • Let h1 and h2 be admissible heuristics. If h1(n) <= h2(n) for all n, then h2 is more informed than h1 (h2 dominates h1), and • Fewer nodes will be expanded, on average, with h2 than with h1 • The larger the h-values, the better (as long as they don't overestimate)

  27. A* is often not feasible • Still a memory hog • What can we do? • Use an iterative deepening style strategy!

  28. IDA* • Like iterative deepening, but search to f-contours rather than fixed depths. • Each iteration expands all nodes within a particular f-value range.

  29. fLimSearch:

    INFINITY = float('inf')

    def fLimSearch(fringe, fLim):
        # Depth-first search over nodes whose f-value is <= fLim; also
        # returns the smallest f-value that exceeded the limit, which
        # becomes the limit for the next iteration.
        nextF = INFINITY
        while fringe:
            cur = fringe[0]
            fringe = fringe[1:]
            curF = cur.gval + h(cur)
            if curF <= fLim:
                if goalp(cur):
                    return (cur, curF)
                succNodes = makeNodes(cur, successors(cur))
                for s in succNodes:
                    fVal = s.gval + h(s)
                    if fVal > fLim and fVal < nextF:
                        nextF = fVal
                fringe = succNodes + fringe
        return ([], nextF)

  30. IDAstar:

    def IDAstar(start):
        result = []
        startNode = Node(start)
        fLim = h(startNode)
        while not result:
            # Retry with the f-limit raised to the smallest pruned f-value.
            result, fLim = fLimSearch([startNode], fLim)
        return result

  31. IDA* • Worst case, space is O(bd) (if edge costs are all equal and the heuristic is admissible; we aren't analyzing other cases) • Optimal, if h is admissible • The number of iterations grows as the number of distinct f-values grows. Let x = the average number of nodes with the same f-value. The lower x is, the fewer new nodes, on average, are expanded on each iteration. • Practical if x is not too low: it avoids the overhead of maintaining a sorted queue, and realizes the space savings of depth-first search

  32. Beam Search • Cheap, unpredictable search • For problems with many solutions, it may be worthwhile to discard unpromising paths • Greedy best first search that keeps a fixed number of nodes on the fringe

  33. Beam Search

    def beamSearch(fringe, beamwidth):
        while len(fringe) > 0:
            cur = fringe[0]
            fringe = fringe[1:]
            if goalp(cur):
                return cur
            newnodes = makeNodes(cur, successors(cur))
            for s in newnodes:
                fringe = insertByH(s, fringe)
            # Irrevocably discard everything beyond the beam width.
            fringe = fringe[:beamwidth]
        return []

  34. Beam Search • Optimal? Complete? • Hardly! • Space? • O(b) (generates the successors) • Often useful

  35. General Notes before Continuing

  36. Search strategies differ along many dimensions • Basic strategy: depth-first, breadth-first, least-actual-cost (g(n)), best first (h(n)), or a mixture? • Is the algorithm iterative, starting by looking at a small part of the state space and then successively looking at larger parts of it? (e.g., iterative deepening and IDA*)

  37. Search strategies differ along many dimensions • Does it pay attention to cycles? (i.e., our treesearch vs. graphsearch) • Can it backtrack? Or are parts of the search tree/graph irrevocably pruned? (e.g., beam search) • Does it only look ahead toward goal (h), or does it also consider how far it has come so far? (g)

  38. A note on optimality • It might be desirable to be greedy (e.g., greedy best-first vs. A*) • Simon: people are often “satisficers”: often, they stop as soon as they find a satisfactory solution • Consider choosing a line at the grocery store, or finding a parking space

  39. Creating Heuristics

  40. Combining Heuristics • If you have lots of heuristics and none dominates the others and all are admissible… • Use them all! • h(n) = max(h1(n), …, hm(n))
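A one-line sketch of the combination (names are assumptions):

    def combined_h(node, heuristics):
        # The pointwise max of admissible heuristics is admissible and at
        # least as informed as any single one of them.
        return max(hf(node) for hf in heuristics)

    # e.g., h = lambda n: combined_h(n, [h1, h2])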

  41. Relaxed Heuristic • Relaxed problem: a problem with fewer restrictions on the actions • The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.

  42. Relaxed Problems • Exact solutions to different (relaxed) problems • H1 (# of misplaced tiles) is perfectly accurate if a tile could move to any square • H2 (sum of Manhattan distances) is perfectly accurate if a tile could move 1 square in any direction

  43. Relaxed Problems • If problem is defined formally as a set of constraints, relaxed problems can be generated automatically • Absolver (Prieditis, 1993) • Discovered a better heuristic for 8 puzzle and the first useful heuristic for Rubik’s cube • Next slide: formal definition of a problem that will allow us to relax it in order to automatically generate heuristics • This looks forward to the planning section of the course

  44. Systematic Relaxation • Precondition List • A conjunction of predicates that must hold true before the action can be applied • Add List • A list of predicates that are to be added to the description of the world-state as a result of applying the action • Delete List • A list of predicates that are no longer true once the action is applied and should, therefore, be deleted from the state description • Primitive Predicates • ON(x, y) :tile x is on cell y • CLEAR(y) :cell y is clear of tiles • ADJ(y, z) :cell y is adjacent to cell z

  45. Here is the full definition of a move for the n-puzzle:

    Move(x, y, z):
        precondition list: ON(x, y), CLEAR(z), ADJ(y, z)
        add list:          ON(x, z), CLEAR(y)
        delete list:       ON(x, y), CLEAR(z)
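A sketch of the same schema as data, so that relaxations can be generated mechanically by dropping preconditions (the representation is an assumption):

    move = {
        'pre':    ['ON(x, y)', 'CLEAR(z)', 'ADJ(y, z)'],
        'add':    ['ON(x, z)', 'CLEAR(y)'],
        'delete': ['ON(x, y)', 'CLEAR(z)'],
    }

    def relax(action, dropped):
        # Copy of the action schema with the given preconditions removed.
        return {**action, 'pre': [p for p in action['pre'] if p not in dropped]}

    # Dropping CLEAR(z) and ADJ(y, z) yields the "# tiles out of place"
    # relaxation (slide 46); dropping only CLEAR(z) yields the
    # "Manhattan distance" relaxation (slide 47).
    misplaced_relaxation = relax(move, {'CLEAR(z)', 'ADJ(y, z)'})
    manhattan_relaxation = relax(move, {'CLEAR(z)'})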

  46. [Figure: a 15-puzzle start state and the goal state] (1) Removing CLEAR(z) and ADJ(y, z) gives "# tiles out of place". Misplaced distance is 1+1 = 2 moves

  47. [Figure: the same 15-puzzle start and goal states] (2) Removing CLEAR(z) gives "Manhattan distance". Manhattan distance is 6+3 = 9 moves

  48. Pattern Database Heuristics • The idea behind pattern database heuristics is to store exact solution costs for every possible sub-problem instance.

  49. [Figure: 15-puzzle boards in which only the pattern tiles 3, 7, 11, 12, 13, 14, 15 are shown] Solve part of the problem, ignoring the other tiles

  50. Pattern Databases • optimal solution cost of the subproblem <= optimal solution cost of the full problem. • Run exhaustive search to find optimal solutions for every possible configuration of 3, 7, 11, 12, 13, 14, 15, and store the resulting path costs • Do the same for the other tiles and space (maybe in two subsets) • Do this once before any problem solving is performed. Expensive, but can be worth it, if the search will be applied to many problem instances (deployed)
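A sketch of that precomputation as a breadth-first search backward from the abstracted goal (the moves helper, which generates the legal moves of the abstracted puzzle, is an assumption; sliding-tile moves are reversible, so backward search can reuse the forward move generator):

    from collections import deque

    def build_pattern_db(abstract_goal, moves):
        # BFS from the abstracted goal; db maps each abstract state to its
        # exact optimal solution cost.
        db = {abstract_goal: 0}
        frontier = deque([abstract_goal])
        while frontier:
            s = frontier.popleft()
            for t in moves(s):
                if t not in db:
                    db[t] = db[s] + 1
                    frontier.append(t)
        return db

    # At search time the heuristic is a lookup: h(state) = db[abstract(state)]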
