
Problem Solving by Search by Jin Hyung Kim Computer Science Department KAIST



  1. Problem Solving by Search, by Jin Hyung Kim, Computer Science Department, KAIST

  2. Example of Representation • Euler Path KAIST CS570 lecture note

  3. Graph Theory • A graph consists of • a set of nodes: may be infinite • a set of arcs (links) • Directed graph, underlying graph, tree • Notations • node, start node (root), leaf (tip node), path, ancestor, descendant, child (son), parent (father), cycle, DAG, connected, locally finite graph, node expansion

  4. State Space Representation • Basic components • set of states {s} • set of operators { o : s -> s } • control strategy { c : sn -> o } • State space graph • state -> node • operator -> arc • Four-tuple representation • [N, A, S, GD], solution path
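The four-tuple [N, A, S, GD] can be written down directly. A minimal sketch, using a tiny graph invented for illustration (the names N, A, S, GD mirror the slide's notation):

```python
# Hypothetical state space in the four-tuple form [N, A, S, GD].
N = {"s", "a", "b", "g"}                  # set of states (nodes)
A = {("s", "a"), ("s", "b"), ("a", "g")}  # set of arcs (operator applications)
S = "s"                                   # start state
GD = {"g"}                                # goal description (set of goal states)

def successors(state):
    """Operators { o : s -> s } realized as a successor function."""
    return sorted(t for (f, t) in A if f == state)

print(successors("s"))  # ['a', 'b']
```

A control strategy is then just a rule that picks which successor to pursue next; the search algorithms on the following slides are exactly such rules.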

  5. Examples of SSR • Tic-Tac-Toe • n²−1 puzzle • Traveling salesperson problem (TSP)

  6. Search Strategies • A strategy is defined by picking the order of node expansion • Search directions • Forward search (from start to goal) • Backward search (from goal to start) • Bidirectional • Irrevocable vs. revocable • Irrevocable strategy: hill-climbing • Most popular in human problem solving • No shift of attention to suspended alternatives • May end up at a local maximum • Commutative assumption: applying an inappropriate operator may delay, but never prevents, the eventual discovery of a solution • Revocable strategy: tentative control • One alternative is chosen; the others are kept in reserve

  7. Evaluation of Search Strategies • Completeness • Does it always find a solution if one exists? • Time complexity • Number of nodes generated/expanded • Space complexity • Maximum number of nodes in memory • Optimality • Does it always find a least-cost solution? • An algorithm is admissible if it terminates with an optimal solution • Time and space complexity are measured by • b – maximum branching factor of the search tree • d – depth of the least-cost solution • m – maximum depth of the state space

  8. Implementing Search Strategies • Uninformed search • Search does not depend on the nature of the solution • Systematic search methods • Breadth-first search • Depth-first search (backtracking) • Depth-limited search • Uniform cost search • Iterative deepening search • Informed or heuristic search • Best-first search • Greedy search (h only) • A* search (g + h) • Iterative deepening A* search

  9. X-First Search Algorithm
  1. Put s in OPEN.
  2. If OPEN is empty, fail.
  3. Select and remove a node from OPEN (call it n).
  4. Expand n and put its successors in OPEN.
  5. If any successor is the goal, succeed; otherwise go to step 2.
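The flowchart can be sketched as a single generic routine where the way a node is removed from OPEN (step 3) determines the strategy. This is a minimal sketch on a hypothetical graph; as in the flowchart, the goal test is applied when successors are generated:

```python
from collections import deque

def x_first_search(start, goal, successors, lifo=False):
    """Generic OPEN-list search: FIFO removal (lifo=False) gives
    breadth-first search, LIFO removal gives depth-first search."""
    open_list = deque([start])                # step 1: put s in OPEN
    visited = {start}                         # avoid re-inserting states
    while open_list:                          # step 2: fail when OPEN empties
        n = open_list.pop() if lifo else open_list.popleft()  # step 3
        for s in successors(n):               # step 4: expand n
            if s == goal:
                return True                   # step 5: success
            if s not in visited:
                visited.add(s)
                open_list.append(s)
    return False

graph = {"s": ["a", "b"], "a": ["g"], "b": [], "g": []}  # hypothetical graph
print(x_first_search("s", "g", graph.get))   # True
print(x_first_search("s", "x", graph.get))   # False
```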

  10. Comparison of BFS and DFS • Selection strategy from OPEN • BFS by FIFO – queue • DFS by LIFO – stack • BFS always terminates if a goal exists (cf. DFS may not on a locally finite infinite tree) • BFS guarantees the shortest path to the goal • Space requirement • BFS – exponential • DFS – linear: keeps only the children of the nodes on the current path • Which is better, BFS or DFS?
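The FIFO/LIFO difference is easy to see by recording the expansion order on a small tree (the tree below is invented for illustration):

```python
from collections import deque

# Hypothetical complete binary tree: A is the root, D-G are leaves.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}

def expansion_order(root, lifo):
    open_list, order = deque([root]), []
    while open_list:
        n = open_list.pop() if lifo else open_list.popleft()
        order.append(n)
        open_list.extend(tree[n])
    return order

print(expansion_order("A", lifo=False))  # BFS: level by level
print(expansion_order("A", lifo=True))   # DFS: dives to a leaf first
```

BFS visits A, B, C, D, E, F, G (layer by layer); DFS visits A, C, G, F, B, E, D (one branch to the bottom before backtracking), which is why DFS only needs to store one path plus siblings.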

  11. Depth-limited Search • = depth-first search with a depth limit l • Nodes at depth l are treated as if they have no successors

  12. Uniform Cost Search • A generalized version of breadth-first search • C(ni, nj) = cost of going from ni to nj • g(n) = (tentative minimal) cost of a path from s to n • Guaranteed to find the minimum-cost path • Equivalent to Dijkstra's algorithm

  13. Uniform Cost Search Algorithm
  1. Put s in OPEN, set g(s) = 0.
  2. If OPEN is empty, fail.
  3. Remove the node of OPEN whose g(.) value is smallest and put it in CLOSED (call it n).
  4. If n = goal, succeed.
  5. Expand n, calculate g(.) of its successors, and put the successors in OPEN with pointers back to n. Go to step 2.
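A minimal sketch of this algorithm with a priority queue playing the role of OPEN (the weighted graph is invented for illustration; paths replace the back-pointers):

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """successors(n) -> iterable of (neighbor, step_cost).
    Returns (g(goal), solution path) or None on failure."""
    open_heap = [(0, start, [start])]        # (g(n), node, path back to s)
    closed = set()
    while open_heap:                          # empty OPEN means failure
        g, n, path = heapq.heappop(open_heap) # smallest g(.) first
        if n in closed:
            continue
        closed.add(n)                         # put n in CLOSED
        if n == goal:
            return g, path                    # success
        for m, c in successors(n):            # expand n
            if m not in closed:
                heapq.heappush(open_heap, (g + c, m, path + [m]))
    return None

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("g", 5)],
         "b": [("g", 1)], "g": []}            # hypothetical weighted graph
print(uniform_cost_search("s", "g", graph.get))  # (3, ['s', 'a', 'b', 'g'])
```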

  14. Iterative Deepening Search • A compromise between BFS and DFS • Iterative deepening search = depth-first search with an increasing depth limit • Saves on storage, guarantees a shortest path • The additional node expansion is negligible • Can you apply this idea to uniform cost search? • proc Iterative_Deepening_Search(Root) • begin • Success := 0; • for (depth_bound := 1; Success != 1; depth_bound++) • { depth_first_search(Root, depth_bound); • if goal found, Success := 1; • } • end
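A runnable Python version of this procedure, a minimal sketch on a hypothetical tree (the explicit max_depth bound is an addition so the loop terminates when no goal exists):

```python
def depth_limited_search(node, goal, successors, limit):
    """Depth-first search that treats nodes at the limit as leaves."""
    if node == goal:
        return True
    if limit == 0:
        return False          # node at the depth limit: no successors
    return any(depth_limited_search(s, goal, successors, limit - 1)
               for s in successors(node))

def iterative_deepening_search(root, goal, successors, max_depth=50):
    for depth_bound in range(max_depth + 1):  # increasing depth limit
        if depth_limited_search(root, goal, successors, depth_bound):
            return depth_bound                # depth of a shallowest goal
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}   # hypothetical tree
print(iterative_deepening_search("A", "F", tree.get))  # 2
```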

  15. Iterative Deepening (l=0)

  16. Iterative Deepening (l=1)

  17. Iterative Deepening (l=2)

  18. Iterative Deepening (l=3)

  19. Properties of IDS • Complete? Yes • Time complexity • d·b¹ + (d−1)·b² + … + 1·b^d = O(b^d) • The bottom level is generated once, the top level d times • Space complexity: O(bd) • Optimal? Yes, if step cost = 1 • Can it be modified to explore a uniform-cost tree? • Numerical comparison of speed (# of nodes generated) • b = 10 and d = 5, solution at the far right • N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450 • N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
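These counts can be checked directly from the formulas. The sketch below assumes the slide's counting convention, in which BFS also generates the b^(d+1) − b children at the level below the goal (the final 999,990 term):

```python
b, d = 10, 5   # branching factor and solution depth from the slide

# IDS: level i is regenerated on every iteration from depth i to depth d,
# i.e. (d - i + 1) times:  d*b^1 + (d-1)*b^2 + ... + 1*b^d
n_ids = sum((d - i + 1) * b**i for i in range(1, d + 1))

# BFS: every level down to d, plus the children generated below it.
n_bfs = sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)

print(n_ids, n_bfs)   # 123450 1111100
```

So IDS generates roughly 9x fewer nodes here, despite repeating the shallow levels: the bottom level dominates both sums.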

  20. Repeated States • Failure to detect repeated states can turn a linear problem into an exponential one! • Search on a graph
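The blow-up is easy to demonstrate on a chain of states where two distinct operators both lead to the next state (a hypothetical worst case invented for illustration): tree search expands every duplicate path, graph search expands each state once.

```python
def count_expansions(n_states, use_closed_set):
    """Count expansions searching 0 -> n_states on a chain where two
    distinct arcs lead from each state i to state i + 1."""
    closed, expansions = set(), 0
    frontier = [0]
    while frontier:
        s = frontier.pop()
        if use_closed_set:
            if s in closed:
                continue              # repeated state detected: skip it
            closed.add(s)
        expansions += 1
        if s < n_states:
            frontier += [s + 1, s + 1]  # two operators, same successor
    return expansions

print(count_expansions(10, False))  # 2047 expansions (exponential)
print(count_expansions(10, True))   # 11 expansions (linear)
```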

  21. Summary of Algorithms

  22. Informed Search

  23. 8-Puzzle Heuristics (figure: start and goal boards, with three candidate moves a, b, c) • # of misplaced tiles • Sum of Manhattan distances • Which is the best move among a, b, c?

  24. Road Map Problem • To get to Bucharest, which city do you choose to visit next from Arad: Zerind, Sibiu, or Timisoara? • Your rationale?

  25. Best-First Search • Idea: use an evaluation function for each node • an estimate of "desirability" • We use the notation f(.) • Special cases depending on f(.) • Greedy search • Uniform cost search • A* search algorithm

  26. Best-First Search Algorithm (for tree search)
  1. Put s in OPEN, compute f(s).
  2. If OPEN is empty, fail.
  3. Remove the node of OPEN whose f(.) value is smallest and put it in CLOSED (call it n).
  4. If n = goal, succeed.
  5. Expand n, calculate f(.) of its successors, and put the successors in OPEN with pointers back to n. Go to step 2.

  27. Generic Form of Best-First Search • Best-first algorithm with f(n) = g(n) + h(n), where g(n) : cost of the path from the start to node n h(n) : heuristic estimate of the cost from n to a goal h(G) = 0, h(n) >= 0 • Also called algorithm A • Uniform cost algorithm • when h(n) = 0 always • Greedy search algorithm • when g(n) = 0 always • Expands the node that appears to be closest to the goal • Complete? No – it can get stuck at a locally optimal state • Greedy search may not lead to optimality • The algorithm is called A* if h(n) <= h*(n)

  28. Examples of Admissible Heuristics • h(n) <= h*(n) for all n • Air distance never overestimates the actual road distance • 8-tile problem • Number of misplaced tiles • Sum of Manhattan distances • TSP • Length of a minimum spanning tree (figure: current and goal boards) Number of misplaced tiles: 4 Sum of Manhattan distances: 1 + 2 + 0 + 0 + 0 + 2 + 0 + 1 = 6
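Both 8-puzzle heuristics fit in a few lines. A minimal sketch using the slide's goal layout (1 2 3 / 8 _ 4 / 7 6 5); the sample board below is chosen for illustration and is not the board pictured on the slide:

```python
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # 0 marks the blank

def misplaced_tiles(board):
    """h1: number of tiles (blank excluded) not in their goal cell."""
    return sum(1 for i, t in enumerate(board) if t != 0 and t != GOAL[i])

def manhattan(board):
    """h2: sum over tiles of |row - goal row| + |col - goal col|."""
    goal_pos = {t: (i // 3, i % 3) for i, t in enumerate(GOAL)}
    return sum(abs(i // 3 - goal_pos[t][0]) + abs(i % 3 - goal_pos[t][1])
               for i, t in enumerate(board) if t != 0)

start = (2, 8, 3, 1, 6, 4, 7, 0, 5)   # sample board
print(misplaced_tiles(start), manhattan(start))  # 4 5
```

For this board h1 = 4 and h2 = 5; h2 is never smaller than h1, since each misplaced tile contributes at least 1 to the Manhattan sum.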

  29. Algorithm A* • f(n) = g(n) + h(n), where h(n) <= h*(n) for all n • Expand the node with minimum f(n) next • Role of h(n) • directs the search toward the goal • Role of g(n) • guards against roaming due to imperfect heuristics (figure: start node –g(n)→ current node n –h(n)→ goal)

  30. A* Search Example (Romania, with costs in km)
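The Romania example can be run directly. A sketch assuming the road segments and straight-line distances to Bucharest from the standard AIMA Romania map (only the subset needed for Arad → Bucharest is included):

```python
import heapq

ROADS = {
    ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Fagaras", "Bucharest"): 211, ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Pitesti", "Bucharest"): 101,
}
H = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def neighbors(city):
    for (a, b), dist in ROADS.items():
        if a == city:
            yield b, dist
        elif b == city:
            yield a, dist

def a_star(start, goal):
    open_heap = [(H[start], 0, start, [start])]  # (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, n, path = heapq.heappop(open_heap)  # smallest f(.) first
        if n == goal:
            return g, path
        if n in closed:
            continue
        closed.add(n)
        for m, d in neighbors(n):
            if m not in closed:
                heapq.heappush(open_heap, (g + d + H[m], g + d, m, path + [m]))
    return None

print(a_star("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Note that A* avoids the tempting Fagaras route (f = 450 at Bucharest) in favor of the Rimnicu Vilcea–Pitesti route (f = 418), exactly because g(n) keeps the greedy pull of h(n) honest.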

  31.–35. (figure-only slides: successive steps of the A* search on the Romania map)

  36. Nodes not expanded (node pruning)

  37. Algorithm A* Is Admissible • Suppose some suboptimal goal G2 has been generated and is in OPEN. Let n be an unexpanded node on a shortest path to an optimal goal G1. f(G2) = g(G2) since h(G2) = 0 > g(G1) since G2 is suboptimal >= f(n) since h is admissible Since f(G2) > f(n), A* will never select G2 for expansion. (figure: start, n, G1, G2)

  38. A* Expands Nodes in Order of Increasing f Value • Gradually adds "f-contours" of nodes (cf. breadth-first adds layers) • Contour i has all nodes with f = fi, where fi < fi+1 • Every node with f(n) = g(n) + h(n) <= C* will be expanded eventually • A* terminates even on a locally finite graph: completeness

  39. Monotonicity (Consistency) • A heuristic function is monotone if for all states ni and nj = suc(ni), h(ni) − h(nj) <= cost(ni, nj) and h(goal) = 0 • A monotone heuristic is admissible

  40. Uniform Cost Search's f-contours (figure: Start, Goal)

  41. A*'s f-contours (figure: Start, Goal)

  42. Greedy's f-contours (figure: Start, Goal)

  43. More Informedness (Dominance) • For two admissible heuristics h1 and h2, h2 is more informed than h1 (dominates h1) if • h1(n) <= h2(n) for all n • For the 8-tile problem • h1 : # of misplaced tiles • h2 : sum of Manhattan distances • Combining several admissible heuristics • h(n) = max{ h1(n), …, hk(n) } (figure: 0 <= h1(n) <= h2(n) <= h*(n))
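The max-combination is a one-liner: since each hi underestimates h*, so does their maximum, and the result dominates every component. A small sketch with toy heuristic values invented for illustration:

```python
def combine(*heuristics):
    """Pointwise max of admissible heuristics: admissible and
    at least as informed as each component."""
    return lambda n: max(h(n) for h in heuristics)

# Hypothetical admissible estimates on two states "a" and "b".
h1 = {"a": 3, "b": 1}.get
h2 = {"a": 2, "b": 4}.get
h = combine(h1, h2)
print(h("a"), h("b"))  # 3 4
```

Neither h1 nor h2 dominates the other here, but the combined h dominates both.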

  44. Semi-admissible Heuristics and Risky Heuristics • If h(n) <= h*(n)(1 + ε), then C(n) <= (1 + ε) C*(n) • A small sacrifice in cost can save a lot of search computation • Semi-admissible heuristics save a lot in difficult problems • particularly when the costs of solution paths are quite similar • ε-admissible • Use of non-admissible heuristics, with risk • Utilize heuristic functions that are admissible in most cases • Statistically obtained heuristics PEARL, J., AND KIM, J. H. Studies in semi-admissible heuristics. IEEE Trans. PAMI-4, 4 (1982), 392–399.

  45. Dynamic Use of Heuristics • f(n) = g(n) + h(n) + ε[1 − d(n)/N]·h(n) • d(n) : depth of node n • N : (expected) depth of the goal • At shallow levels: depth-first excursion • At deep levels: assumes admissibility • Modify heuristics during the search • Utilize information obtained earlier in the search process to modify the heuristics used in the later part of the search

  46. Inventing Admissible Heuristics: Relaxed Problems • An admissible heuristic is the exact solution cost of a relaxed problem • 8-puzzle • A tile can jump – number of misplaced tiles • A tile can move to an adjacent cell even if it is occupied – Manhattan distance heuristic • Automatic heuristic generator: ABSOLVER (Prieditis, 1993) • Traveling salesperson problem • Cost of a minimum spanning tree < cost of a TSP tour • A minimum spanning tree can be computed in O(n²)

  47. Inventing Admissible Heuristics: Subproblems • Solutions of subproblems • Take the max of the heuristics of subproblems stored in a pattern database • 1/1,000 of the nodes are generated for the 15-puzzle compared with the Manhattan heuristic • Disjoint subproblems • 1/10,000 for the 24-puzzle compared with Manhattan

  48. Iterative Deepening A* • Iterative deepening version of A* • uses a threshold on f(.) as the depth bound • Finds a solution under the current threshold of f(.) • Increases the threshold to the minimum f(.) that exceeded it in the previous cycle • Still admissible • same order of node expansion • Storage-efficient – practical • but suffers with real-valued f(.) • large number of iterations

  49. Iterative Deepening A* Search Algorithm (for tree search)
  1. Put s in OPEN, compute f(s); set the threshold to h(s).
  2. If OPEN is empty, set the threshold to min(f(.), threshold) over the pruned successors and go to step 1.
  3. Remove the node of OPEN whose f(.) value is smallest and put it in CLOSED (call it n).
  4. If n = goal, succeed.
  5. Expand n and calculate f(.) of its successors. Put a successor in OPEN (with a pointer back to n) only if f(successor) < threshold. Go to step 2.
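IDA* is usually written recursively, with each cycle a depth-first probe cut off where f exceeds the threshold. A minimal sketch on a hypothetical weighted graph (the heuristic values are invented but admissible for it):

```python
def ida_star(start, goal, successors, h):
    """successors(n) -> [(child, step_cost)]. Returns the optimal cost."""
    def search(n, g, threshold, path):
        f = g + h(n)
        if f > threshold:
            return f                    # cut off: report f for the next cycle
        if n == goal:
            return ("FOUND", g)
        minimum = float("inf")
        for m, c in successors(n):
            if m in path:               # avoid cycles on the current path
                continue
            t = search(m, g + c, threshold, path | {m})
            if isinstance(t, tuple):
                return t
            minimum = min(minimum, t)   # smallest f that exceeded threshold
        return minimum

    threshold = h(start)                # initial threshold
    while True:
        t = search(start, 0, threshold, {start})
        if isinstance(t, tuple):
            return t[1]                 # optimal cost found
        if t == float("inf"):
            return None                 # no solution
        threshold = t                   # raise threshold, repeat the probe

graph = {"s": [("a", 1), ("b", 4)], "a": [("g", 5), ("b", 1)],
         "b": [("g", 1)], "g": []}      # hypothetical weighted graph
h = {"s": 2, "a": 2, "b": 1, "g": 0}.get
print(ida_star("s", "g", graph.get, h))  # 3
```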

  50. Memory-bounded Heuristic Search • Recursive best-first search • A variation of depth-first search • Keeps track of the f-value of the best alternative path • Unwinds if the f-values of all children exceed the best alternative • When unwinding, stores the f-value of the best child as its own f-value • When needed, the parent regenerates its children • Memory-bounded A* • When OPEN is full, delete the worst node from OPEN, storing its f-value in its parent • The deleted node is regenerated when all other candidates look worse than it
