
Knowledge Search



  1. Knowledge Search CPTR 314

  2. Approaches to Searching • There are two main approaches to searching a search tree • Data-driven search • Goal-driven search

  3. Data-Driven Search • Starts from an initial state and applies the allowed actions to move forward until a goal is reached. This approach is also known as forward chaining. • Useful when the initial data are given but it is not clear what the goal is.

  4. State space in which data-directed search prunes irrelevant data and their consequents and determines one of a number of possible goals.
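A minimal sketch of the data-driven (forward-chaining) idea, assuming a hypothetical rule base of (premises, conclusion) pairs and a set of starting facts; none of these names come from the slides. A goal-driven search would instead start from the goal and apply the same rules backward.

```python
# Forward chaining: start from the known facts and fire rules until the goal appears.
# The rules and facts below are illustrative placeholders.
rules = [
    ({"a", "b"}, "c"),   # if a and b then c
    ({"c"}, "goal"),     # if c then goal
]

def forward_chain(facts, rules, goal):
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add its conclusion as a new fact
                changed = True
    return goal in facts

print(forward_chain({"a", "b"}, rules, "goal"))  # True
```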

  5. Goal-Driven Search • A search can start at the goal and work back toward a start state, by seeing what moves could have led to the goal state. This is also known as backward chaining. • Useful in situations in which the goal can be clearly specified. • Examples • Theorem proving • Medical diagnosis

  6. State space in which goal-directed search effectively prunes extraneous search paths.

  7. Generate and Test • The simplest approach to search • Generate each node in the search space and test it to see if it is a goal node • This is the simplest form of brute-force search (exhaustive search) • It is the manner in which people often solve problems when there is no additional information about how to reach a solution • The generation of candidate states must satisfy the following properties • It must be complete • It must be non-redundant • It must be well informed
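A minimal generate-and-test sketch, assuming hypothetical generate_states() and is_goal() callables that are not defined in the slides; the generator is what has to provide the completeness and non-redundancy properties listed above.

```python
def generate_and_test(generate_states, is_goal):
    """Exhaustively generate candidate states and test each one against the goal."""
    for state in generate_states():      # the generator should be complete and non-redundant
        if is_goal(state):
            return state                 # first goal state found
    return None                          # search space exhausted without finding a goal

# Toy usage: find a number whose square is 49 among the candidates 0..99.
print(generate_and_test(lambda: range(100), lambda s: s * s == 49))  # 7
```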

  8. Depth-First Search • It follows each path to its greatest depth before moving on to the next path • It is an example of brute-force search, or exhaustive search • Uses a method called chronological backtracking to move back up the search tree once a dead end has been found • It closely relates to the natural way in which humans search for things
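A depth-first search sketch under the usual assumptions (a hypothetical successors() function and hashable states); the explicit stack plays the role of chronological backtracking, returning to the most recent choice point when a dead end is reached.

```python
def depth_first_search(start, is_goal, successors):
    """Follow each path to its greatest depth; backtrack when a dead end is found."""
    stack = [(start, [start])]            # LIFO stack of (state, path from start)
    visited = {start}
    while stack:
        state, path = stack.pop()         # most recently generated state first
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in visited:      # avoid revisiting states on other paths
                visited.add(child)
                stack.append((child, path + [child]))
    return None                           # no goal reachable from start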

  9. Depth-First Search

  10. Backtracking search of a hypothetical state space.

  11. Depth-first search of the 8-puzzle with a depth bound of 5.

  12. Breadth-First Search • Also a brute-force search • Searches the tree level by level • Breadth-first search is a far better method in situations where the tree may have very deep paths, particularly when the goal node lies in a shallower part of the tree • Unfortunately, it does not perform well when the branching factor of the tree is extremely high • It is a poor choice when all paths lead to goal nodes at similar depths; depth-first search is better in that case
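A breadth-first search sketch using the same assumed successors() interface as the depth-first version; the only structural change is a FIFO queue in place of the stack, so the shallowest goal is found first.

```python
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Search the tree level by level; the shallowest goal node is found first."""
    queue = deque([(start, [start])])     # FIFO queue of (state, path from start)
    visited = {start}
    while queue:
        state, path = queue.popleft()     # oldest (shallowest) state first
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in visited:
                visited.add(child)
                queue.append((child, path + [child]))
    return None
```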

  13. Breadth-First Search

  14. Breadth-first search of the 8-puzzle, showing the order in which states are visited.

  15. Symmetry • Symmetries in a search space can tremendously reduce the amount of search required

  16. First three levels of the tic-tac-toe state space reduced by symmetry.

  17. Heuristic Search • A heuristic is an informed guess • Polya's definition: “The study of the methods and rules of discovery and invention” • In state space search, heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable problem solution

  18. Heuristic Evaluation Functions • Functions that, when applied to a node, give a value that represents a good estimate of the distance of the node from the goal • For two nodes m and n and heuristic function h, if h(m) < h(n), then it should be the case that m is more likely than n to be on an optimal path to the goal node

  19. Informed and Uninformed Methods • A search method or heuristic is informed if it uses additional information about nodes that have not yet been explored to decide which nodes to examine next • A heuristic h is said to be more informed than another heuristic j if h(node) ≥ j(node) for every node in the search space (assuming neither heuristic overestimates the true distance to the goal)

  20. Choosing a Good Heuristic • Some heuristics are better than others, and the better (more informed) the heuristic is, the fewer nodes it needs to examine to find a solution • Choosing the right heuristic can make a significant difference in our ability to solve a problem

  21. Best-First Search • The simplest way to implement heuristic search is through a procedure called hill climbing. • The best child is selected for further expansion; neither its siblings nor its parent are retained. Search halts when it reaches a state that is better than any of its children

  22. Best-First Search • Because it keeps no history, the algorithm cannot recover from failures of its strategy • A major problem of hill-climbing strategies is their tendency to become stuck at local maxima • Because “better” need not be “best” in an absolute sense, search methods without backtracking or some other recovery mechanism are unable to distinguish between local and global maxima • It is equally applicable to data- and goal-driven searches
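A hill-climbing sketch matching the description above, assuming a hypothetical heuristic h (lower is better) and a successors() function; because no history is kept, it can stop at a local optimum rather than the global one.

```python
def hill_climb(start, h, successors):
    """Expand only the best child; halt when no child improves on the current state."""
    current = start
    while True:
        children = list(successors(current))
        if not children:
            return current                 # dead end: nothing to expand
        best = min(children, key=h)        # lower h means closer to the goal
        if h(best) >= h(current):          # no child is better: a (possibly local) optimum
            return current
        current = best                     # siblings and parent are discarded
```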

  23. Heuristic search of a hypothetical state space.

  24. Heuristic search of a hypothetical state space with open and closed states highlighted.

  25. The “most wins” heuristic applied to the first children in tic-tac-toe.

  26. Heuristically reduced state space for tic-tac-toe.

  27. The 8-puzzle

  28. The 8-Puzzle Heuristics • Heuristic 1 • The simplest heuristic counts the tiles out of place in each state when it is compared with the goal • Not very good, because it does not take into account the distance the tiles must be moved • Heuristic 2 • A “better” heuristic would sum all the distances by which the tiles are out of place
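The two heuristics as code, under an assumed representation (the slides do not fix one): a state is a 9-tuple read row by row, 0 stands for the blank, and the goal arrangement shown is the common 1-8 ring around a central blank.

```python
# Assumed goal configuration: 1 2 3 / 8 _ 4 / 7 6 5, with 0 as the blank.
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

def tiles_out_of_place(state, goal=GOAL):
    """Heuristic 1: count the tiles (not the blank) that are not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL):
    """Heuristic 2: sum each tile's horizontal plus vertical distance from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```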

  29. The 8-Puzzle Heuristics • Both of the previous heuristics can be criticized for failing to acknowledge the difficulty of tile reversals. If two tiles are next to each other and the goal requires them to be in each other's locations, it takes more than two moves to put them in place • Heuristic 3 • A heuristic that takes this into account multiplies a small number (2, for example) by the number of direct tile reversals

  30. Three heuristics applied to states in the 8-puzzle.

  31. Fourth Heuristic • Heuristic 4 • Add the sum of the distances the tiles are out of place and 2 times the number of direct reversals • This example illustrates the difficulty of devising good heuristics • Our goal is to use the limited information available in a single state descriptor to make intelligent choices
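A sketch of heuristic 4, building on the Manhattan-distance function above; the counting of “direct reversals” here (adjacent tiles sitting in each other's goal squares) is one reasonable reading of the slides, not a definitive one.

```python
def direct_reversals(state, goal=GOAL):
    """Count adjacent tile pairs (horizontally or vertically) that occupy each other's goal squares."""
    count = 0
    for i in range(9):
        neighbors = []
        if i % 3 < 2:
            neighbors.append(i + 1)        # square to the right
        if i + 3 < 9:
            neighbors.append(i + 3)        # square below
        for j in neighbors:
            a, b = state[i], state[j]
            if a != 0 and b != 0 and a == goal[j] and b == goal[i]:
                count += 1
    return count

def heuristic_4(state, goal=GOAL):
    """Heuristic 4: total Manhattan distance plus 2 times the number of direct reversals."""
    return manhattan_distance(state, goal) + 2 * direct_reversals(state, goal)
```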

  32. Heuristic Design • The design of good heuristics is an empirical problem • Judgment and intuition help • The final measure of a heuristic must be its actual performance on problem instances

  33. Heuristic evaluation • If two states have the same or nearly the same heuristic evaluation, it is generally preferable to examine the state that is nearest to the root state of the graph • This state will have a greater probability of being on the shortest path to the goal • The distance from the starting state to its descendants can be measured by maintaining a depth count for each state • This makes our evaluation function, f, the sum of two components • f(n) = g(n) + h(n) • Where g(n) measures the actual length of the path from any state n to the start state and h(n) is the heuristic estimate of the distance from state n to a goal
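A best-first search sketch that orders the open list by f(n) = g(n) + h(n), as defined above; successors() and h are assumed interfaces, each move is assumed to cost 1, and the closed set plays the role of the list of already-examined states.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, h):
    """Expand states in order of f(n) = g(n) + h(n): path length so far plus heuristic estimate."""
    tie = count()                                            # tie-breaker so states never need comparing
    open_list = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
    closed = set()
    while open_list:
        f, _, g, state, path = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list, (g + 1 + h(child), next(tie), g + 1, child, path + [child]))
    return None
```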

  34. The heuristic f applied to states in the 8-puzzle.

  35. Games • Two-Person games are more complicated than simple puzzles because of the existence of a “hostile” and essentially unpredictable opponent

  36. Nim Game • Consider a variant of the game Nim, whose state space may be exhaustively searched • A number of tokens are placed on a table between the two opponents • At each move, the player must divide a pile of tokens into two nonempty piles of different sizes • The first player who can no longer make a move loses the game

  37. State space for a variant of nim. Each state partitions the seven tokens into one or more piles.

  38. Minimax • When evaluating game trees, it is usual to assume that the computer is attempting to maximize some score that the opponent is trying to minimize • The Minimax algorithm is used to choose good moves • It assumes that the opponent will play rationally and will always play the move that is best for him or her

  39. Minimax • The opponents in a game are referred to as MIN and MAX • MAX represents the player trying to win, or MAXimize her advantage • MIN is the opponent who attempts to MINimize MAX’s score • Each level in the search space is labeled according to whose move it is at that point in the game, MIN or MAX

  40. MIN, MAX for Nim • Each leaf node is given a value of 1 or 0, depending on whether it is a win for MAX or for MIN • If the parent state is a MAX node, give it the maximum value among its children • If the parent is a MIN node, give it the minimum value among its children • MAX should follow a path of 1’s to win the game
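A minimax sketch for the Nim variant above, under an assumed representation of a state as a sorted tuple of pile sizes; leaf values follow the slide's convention of 1 for a win for MAX and 0 for a win for MIN.

```python
def nim_moves(piles):
    """All states reachable by splitting one pile into two nonempty piles of different sizes."""
    moves = set()
    for i, n in enumerate(piles):
        for a in range(1, n // 2 + 1):
            if a != n - a:                           # the two new piles must differ in size
                moves.add(tuple(sorted(piles[:i] + (a, n - a) + piles[i + 1:])))
    return moves

def minimax(piles, maximizing):
    """Back up 1 (win for MAX) or 0 (win for MIN) from the leaves of the game tree."""
    moves = nim_moves(piles)
    if not moves:                                    # the player to move cannot move and loses
        return 0 if maximizing else 1
    values = [minimax(m, not maximizing) for m in moves]
    return max(values) if maximizing else min(values)

print(minimax((7,), True))   # value of the seven-token game when MAX moves first
```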

  41. Minimax for the game of Nim. Bold lines indicate forced win for MAX. Each node is marked with its derived value (0 or 1) under Minimax.

  42. Minimax to a Fixed Ply Depth • It is seldom possible to expand the state space graph out to the leaf nodes • Instead, the state space is searched to a predefined number of levels, as determined by the available resources of time and memory • This strategy is called an n-ply look-ahead, where n is the number of levels explored
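A depth-limited minimax sketch: search only n plies ahead and back up a heuristic evaluation from the horizon. Here moves() and evaluate() are assumed interfaces, with evaluate() returning values from MAX's point of view.

```python
def minimax_n_ply(state, depth, maximizing, moves, evaluate):
    """Search `depth` plies ahead; apply the heuristic evaluation at the ply bound or at leaves."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)                       # backed-up values start from the horizon
    values = [minimax_n_ply(c, depth - 1, not maximizing, moves, evaluate) for c in children]
    return max(values) if maximizing else min(values)
```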

  43. Minimax applied to a hypothetical state space. Leaf states show heuristic values; internal states show backed-up values.

  44. Heuristic measuring conflict applied to states of tic-tac-toe.

  45. Two-ply minimax applied to the opening move of tic-tac-toe.

  46. Two-ply minimax and one of two possible MAX second moves.

  47. Two-ply minimax applied to X’s move near the end of the game.

  48. Alpha-Beta Pruning • Using alpha-beta pruning, it is possible to remove sections of the game tree that are not worth examining, making the search for a good move more efficient • Alpha is the value the human player (MIN) has to refute, and beta is the value the computer (MAX) has to refute
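An alpha-beta sketch layered on the depth-limited minimax above; here alpha is the best value MAX can already guarantee and beta the best value MIN can already guarantee, and a branch is cut off as soon as it cannot change the result. moves() and evaluate() are the same assumed interfaces as before.

```python
def alpha_beta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Depth-limited minimax with alpha-beta cutoffs."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:              # MIN already has a better option elsewhere: prune
                break
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta, True, moves, evaluate))
            beta = min(beta, value)
            if beta <= alpha:              # MAX already has a better option elsewhere: prune
                break
        return value

# Typical top-level call (hypothetical state and interfaces):
# alpha_beta(start_state, 2, float("-inf"), float("inf"), True, moves, evaluate)
```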
