
Informed Search Algorithms



Presentation Transcript


  1. Informed Search Algorithms Chapter 4, Feb 2 2007

  2. Belief States (Chap 3)

  3. Graph Search (Chap 3)

  4. Graph Search: Analysis • Time and space complexity: proportional to the size of the state space (< O(b^d)) • Optimality: may not be optimal if the algorithm throws away a cheaper path to a node that is already closed • Uniform-cost and breadth-first graph search are optimal when step cost is constant

  5. Chap 4 – Informed Search • Best-first search • Greedy best-first search • Recursive best-first search • A* search • Heuristics • Local search algorithms • Hill-climbing search • Simulated annealing search • Local beam search • Genetic algorithms (sections 4.1 and 4.2 are today's focus)

  6. Review: Tree Search • A search strategy is defined by picking the order of node expansion

  7. Romania with Step Costs in KM additional information that can inform the expansion strategy

  8. Best-First Search • Idea: use an evaluation function f(n) for each node • f(n) is an estimate of "desirability" • Expand the most desirable unexpanded node • Implementation: order the nodes in the fringe in decreasing order of desirability (some form of priority queue) • Special cases: • greedy best-first search • A* search
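The priority-queue idea can be sketched as follows. This is a generic best-first skeleton, not code from the slides: the graph, `successors`, and `f` are placeholders, and "most desirable" here means "lowest f value".

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Generic best-first search: always expand the fringe node with the
    lowest f value. successors(state) yields (next_state, step_cost);
    f(state, g) is the evaluation function -- f = h gives greedy best-first
    search, f = g + h gives A*, f = g gives uniform-cost search."""
    fringe = [(f(start, 0), 0, start, [start])]      # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        _, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # keep only cheapest paths
                best_g[nxt] = g2
                heapq.heappush(fringe, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None, float("inf")

# Tiny hypothetical graph: uniform-cost search (f = g) finds A-B-C, cost 2.
toy = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(best_first_search("A", "C", lambda s: toy[s], lambda s, g: g))
```

The same skeleton yields the special cases below by swapping in a different `f`.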

  9. Greedy Best-First Search • Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal • example: hSLD(n) = straight-line distance from n to Bucharest • Greedy best-first search expands the node that appears to be closest to the goal

  10. Greedy Best-First Search

  11. Greedy Best-First Search

  12. Greedy Best-First Search

  13. Greedy Best-First Search

  14. Greedy Best-First Search • Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt (when trying to get from Iasi to Fagaras) • Time? O(b^m), but a good heuristic can give dramatic improvement • Space? O(b^m), keeps all nodes in memory • Optimal? No

  15. A* Search • Idea: avoid expanding paths that are already expensive • Evaluation function f(n) = g(n) + h(n) • g(n) = cost so far to reach n • h(n) = estimated cost from n to goal • f(n) = estimated total cost of path through n to goal
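As a sketch, here is A* on a fragment of the Romania map. The road distances and hSLD values are those of the standard textbook figures; cities away from the Arad–Bucharest region are omitted for brevity.

```python
import heapq

# A fragment of the Romania road map (step costs in km) and the
# straight-line distance to Bucharest, both from the standard textbook
# figure; outlying cities are omitted.
ROADS = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
         "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def a_star(start, goal):
    # fringe entries: (f, g, state, path); heapq pops the lowest f first
    fringe = [(H_SLD[start], 0, start, [start])]
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in ROADS[state].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # cheaper path to nxt
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + H_SLD[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("Arad", "Bucharest"))
# → (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```

Note how A* avoids the Fagaras route (cost 450) that greedy search would commit to.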

  16. A* Search Example

  17. A* Search Example

  18. A* Search Example

  19. A* Search Example

  20. A* Search Example

  21. A* Search Example

  22. A* Exercise How will A* get from Iasi to Fagaras?

  23. Admissible Heuristics • A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. • An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic • Example: hSLD(n) (never overestimates the actual road distance) • Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal

  24. Admissible Heuristics Admissible Heuristics for the 8-puzzle: h1(n) = number of misplaced tiles h1(S) = 8 h2(n) = total Manhattan distance (sum the walking distance from each square to its desired location) h2(S) = 3+1+2+2+2+3+3+2 = 18
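Both heuristics are easy to compute directly. The start state below is assumed to be the one from the textbook figure (the slide's image is not reproduced here); it yields exactly the values quoted on the slide.

```python
# 8-puzzle heuristics. The start state is assumed to be the one from the
# textbook figure (0 marks the blank square).
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)  # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(START), h2(START))  # → 8 18, matching the slide
```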

  25. Optimality of A* (tree) • Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. f(G2) = g(G2) + h(G2) = g(G2) > C*, since h(G2) = 0 and G2 is a goal on a non-optimal path (C* is the optimal cost) f(n) = g(n) + h(n) ≤ C*, since h is admissible f(n) ≤ C* < f(G2), so G2 will never be expanded • A* will not expand goals on sub-optimal paths

  26. Consistent Heuristics • A heuristic is consistent if for every node n, every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n') • If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n) • i.e., f(n) is non-decreasing along any path. • Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal • consistency is also called monotonicity
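The consistency inequality can be checked numerically. This sketch tests h(n) ≤ c(n,a,n') + h(n') over a fragment of the Romania map, assuming the textbook's road distances and hSLD values.

```python
# Check h(n) <= c(n, a, n') + h(n') for every road in a fragment of the
# Romania map (distances and h_SLD values from the standard textbook figure).
ROADS = [
    ("Arad", "Sibiu", 140), ("Sibiu", "Fagaras", 99),
    ("Sibiu", "Rimnicu Vilcea", 80), ("Rimnicu Vilcea", "Pitesti", 97),
    ("Fagaras", "Bucharest", 211), ("Pitesti", "Bucharest", 101),
]
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def is_consistent(roads, h):
    # Roads are two-way, so check the inequality in both directions.
    return all(h[n] <= c + h[m] and h[m] <= c + h[n] for n, m, c in roads)

print(is_consistent(ROADS, H))  # → True on this fragment
```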

  27. Optimality of A* (graph) • A* expands nodes in order of increasing f value • Gradually adds "f-contours" of nodes • Contour i has all nodes with f = f_i, where f_i < f_{i+1}

  28. Properties of A* • Complete? Yes (unless there are infinitely many nodes with f ≤ f(G) ) • Time? Exponential • Space? Exponential - keeps all nodes in memory • Optimal? Yes

  29. Properties of A* • A* is also optimally efficient • no other optimal algorithm is guaranteed to expand fewer nodes; an algorithm that expanded fewer could miss the optimal solution • Good news: complete, optimal, and optimally efficient • Bad news: time complexity is bad, space complexity is worse

  30. Dominance • If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and h2 is better for search • Typical search costs (average number of nodes expanded) for the 8-puzzle: d=12 IDS = 3,644,035 nodes A*(h1) = 227 nodes A*(h2) = 73 nodes d=24 IDS = too many nodes A*(h1) = 39,135 nodes A*(h2) = 1,641 nodes
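The dominance claim can be reproduced in miniature. The sketch below runs A* with h1 and h2 on an easy, hypothetical 8-puzzle instance (far shallower than the slide's d=12 and d=24 instances) and counts node expansions.

```python
import heapq

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
# A hypothetical start state a handful of moves from GOAL (the slide's
# d=12 and d=24 figures come from much harder instances).
START = (0, 4, 2, 1, 3, 8, 6, 5, 7)

def neighbors(state):
    """States reachable by sliding a tile into the blank (0)."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = (r + dr) * 3 + (c + dc)
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def h1(s):  # number of misplaced tiles
    return sum(1 for a, b in zip(s, GOAL) if a != 0 and a != b)

def h2(s):  # total Manhattan distance (tile t's goal square is index t)
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(s) if t != 0)

def a_star_expansions(start, h):
    """Run A* (unit step costs) and return (solution cost, nodes expanded)."""
    fringe = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while fringe:
        f, g, s = heapq.heappop(fringe)
        if g > best_g[s]:
            continue                      # stale queue entry
        if s == GOAL:
            return g, expanded
        expanded += 1
        for n in neighbors(s):
            if g + 1 < best_g.get(n, float("inf")):
                best_g[n] = g + 1
                heapq.heappush(fringe, (g + 1 + h(n), g + 1, n))

cost1, exp1 = a_star_expansions(START, h1)
cost2, exp2 = a_star_expansions(START, h2)
print(cost1 == cost2, exp2 <= exp1)  # both optimal; h2 expands no more nodes
```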

  31. Comparing 8-puzzle Heuristics h2 dominates h1, so h2 is the better heuristic

  32. Relaxed Problems • A problem with fewer restrictions on the actions is called a relaxed problem • The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem • Reason: the optimal solution to the original problem is, by definition, also a solution to the relaxed problem and must be at least as expensive as the optimal solution to the relaxed problem.

  33. Relaxed Problems • The hSLD(n) function for the Romania travel problem is derived from a relaxed problem: • original problem: drive following roads • relaxed problem: fly in straight lines

  34. Relaxed Problems • 8-puzzle relaxed problem 1:Any tile can be moved anywhere.This derives h1(n) = number of misplaced tiles • 8-puzzle relaxed problem 2:Any tile can move to any adjacent square.This derives h2(n) = total Manhattan distance

  35. Exercise Design a heuristic for the maze problem ('X' = wall, '.' = open, S = start, G = goal):

  XXXXXXXXXXXXXXXXXX
  XX..........XXXGXX
  XX..XX.XXXXXXXX.XX
  XXXX.....XXXXXX.XX
  XXXX.XX..XXXXXX.XX
  XXXX.XX.........XX
  X....XXXXXXXXXXXXX
  XXXX...........XXX
  XXXXXXXXXXXXXX..SX
  XXXXXXXXXXX....XXX
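One possible answer, sketched on a small hypothetical maze rather than the one on the slide: relax the problem by ignoring walls. The optimal cost of the relaxed problem is the Manhattan distance to G, so it is an admissible (and consistent) heuristic for the real maze.

```python
# One answer: relax the maze by ignoring walls, giving Manhattan distance
# to G as the heuristic. The small maze below is hypothetical, not the
# one from the slide ('X' = wall, '.' = open, S = start, G = goal).
MAZE = ["XXXXXXX",
        "XS..X.X",
        "X.X...X",
        "X.X.XGX",
        "XXXXXXX"]

def find(ch):
    """Row/column of the first occurrence of ch in the maze."""
    for r, row in enumerate(MAZE):
        if ch in row:
            return r, row.index(ch)

def h(cell):
    """Manhattan distance from cell to the goal, ignoring walls."""
    gr, gc = find("G")
    return abs(cell[0] - gr) + abs(cell[1] - gc)

print(h(find("S")))  # → 6
```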

  36. Overcoming Space Limitations • A* may run into space limitations • Memory-bounded heuristic search attempts to avoid space complexity • IDA* = iterative deepening A* • RBFS = recursive best-first search • MA*, SMA* = memory-bounded A*

  37. IDA* • A* and best-first keep all nodes in memory • IDA* (iterative deepening A*) is similar to standard iterative deepening, except the cutoff used is the f-cost (g+h) rather than the depth • Cutoff value is set to the smallest f-cost of any node that exceeded the cutoff in the previous iteration
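A minimal IDA* sketch on a fragment of the Romania map (distances and hSLD values assumed from the standard textbook figure): a depth-first search cut off at an f-cost bound, with the bound raised each round to the smallest f-cost that exceeded it.

```python
import math

# A fragment of the Romania road map; distances and straight-line-distance
# values (H) follow the standard textbook figure.
ROADS = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def ida_star(start, goal):
    def dfs(node, g, bound, path):
        f = g + H[node]
        if f > bound:
            return f, None                  # f-cost that exceeded the cutoff
        if node == goal:
            return f, (list(path), g)
        smallest = math.inf                 # smallest f-cost above the cutoff
        for nxt, cost in ROADS[node].items():
            if nxt in path:                 # don't revisit the current path
                continue
            path.append(nxt)
            t, found = dfs(nxt, g + cost, bound, path)
            path.pop()
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = H[start]                        # first cutoff: f(start) = h(start)
    while True:
        t, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if t == math.inf:
            return None, math.inf           # no path exists
        bound = t                           # next cutoff: smallest f above it
```

Only the recursion stack and the current bound are kept in memory, which is the point of the algorithm.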

  38. RBFS • Recursive best-first search (RBFS) tries to mimic best-first search in linear space • Keeps track of the f-value of the best alternative path from any ancestor of the current node • If the current node exceeds that limit, the recursion unwinds back to the alternative path • It will often have to re-expand discarded subtrees
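A sketch of RBFS following the textbook's recursive formulation, on an assumed fragment of the Romania map (standard textbook distances and hSLD values): each recursive call carries the f-value of the best alternative path, and backed-up f-values are stored so abandoned subtrees can be re-explored later.

```python
import math

# A fragment of the Romania road map; distances and straight-line-distance
# values (H) follow the standard textbook figure.
ROADS = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def rbfs(start, goal):
    def search(node, g, f_node, f_limit, path):
        if node == goal:
            return list(path), g
        succs = []
        for nxt, cost in ROADS[node].items():
            if nxt in path:                   # don't revisit the current path
                continue
            g2 = g + cost
            # A child's f is at least its parent's backed-up f
            succs.append([max(g2 + H[nxt], f_node), g2, nxt])
        if not succs:
            return None, math.inf
        while True:
            succs.sort()                      # best (lowest f) first
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]          # unwind to the alternative path
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            path.append(best[2])
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative), path)
            path.pop()
            if result is not None:
                return result, best[0]

    return search(start, 0, H[start], math.inf, [start])
```

Only the current path and its siblings' f-values are stored, so memory use is linear in the path length.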

  39. RBFS f-value of best alternate path

  40. RBFS

  41. RBFS

  42. IDA*, RBFS: too little memory • IDA* remembers only the current f-cost limit • RBFS cannot use more memory, even if it is available • Result: neither can remember repeated states, so both may explore the same state many times

  43. Memory-bounded A* • MA* and SMA* (simplified MA*) work like A* until memory is full • SMA*: if memory is full, drop the worst leaf node and push its f-value back to its parent • Subtrees are regenerated only when all other paths have been shown to be worse than the forgotten path • Thrashing behavior can result if memory is small compared to the size of candidate subpaths
