# Problem Solving: Search Techniques

1. Problem Solving: Search Techniques • SCCS451 Artificial Intelligence • Asst. Prof. Dr. Sukanya Pongsuparb, Dr. Srisupa Palakvangsa Na Ayudhya, Dr. Benjarath Pupacdi

2. Our problem is to find a path from S to G • [Figure: grid of occupied and free cells, with start S and goal G]

3. Convert to tree/graph • [Figure: space representation and its equivalent tree]

4. What we want to do is… • Once you have your graph, you need to search for the best path • We want to find a solution when we are not given an algorithm to solve a problem, but only a specification of what a solution looks like • IDEA: Search for a solution

5. What is “Search”? • Search Algorithm • An algorithm that takes a problem as input • Returns a solution to the problem, usually after evaluating a number of possible solutions • An important aspect of a search algorithm is goal-based problem solving

6. Where to search? • Search through the set of all possible solutions of a problem • This set is called the “search space”

7. How to search? • Start at the start state • Consider the effect of taking different actions starting from states that have been encountered in the search so far • Stop when a goal state is encountered • A solution is a path in a state space from a start node to a goal node (there can be many goal nodes) • The cost of a solution is the sum of the arc costs on the solution path
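The loop described above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the names `general_search`, `is_goal`, and `successors` are ours, and the FIFO frontier happens to make it breadth-first; swapping in a different queue discipline yields other strategies.

```python
from collections import deque

def general_search(start, is_goal, successors):
    """Generic search skeleton: repeatedly take a state encountered so far,
    stop if it is a goal, otherwise consider the actions available from it.
    Returns the solution path (start node to goal node)."""
    frontier = deque([[start]])          # paths waiting to be extended
    explored = set()
    while frontier:
        path = frontier.popleft()        # FIFO order: breadth-first behavior
        state = path[-1]
        if is_goal(state):
            return path                  # solution = path from start to goal
        if state in explored:
            continue
        explored.add(state)
        for nxt in successors(state):
            frontier.append(path + [nxt])
    return None                          # no goal state reachable
```

With unit arc costs, the cost of the returned solution is simply `len(path) - 1`, the number of actions taken.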

8. What is “goal-based” problem solving? • Goal formulation • Based upon the current situation and performance measures • The result is moving into a desirable state (goal state) • Problem formulation • Determining the actions and states to consider, given the goal • Objective • Select the best sequence of actions and states to attain the goal

9. Example • Chess Game • Each board configuration can be thought of as representing a different state of the game. • A change of state occurs when one of the players moves a piece (e.g., e2 to e4). • A goal state is any of the possible board configurations corresponding to a checkmate.

10. Example (cont.) • A chess game has more than 10^120 possible states. • Winning a game amounts to finding a sequence of states, through the many possible states, that leads to one of the goal states, i.e. a checkmate state. • An intelligent chess-playing program would not play the game by exploring all possible moves. • Potential hypotheses must be considered before a good one is chosen.

11. Example (cont.) • Other examples using search techniques • Searching a dictionary for matching words, sentence constructions, and matching contexts • To perceive an image, searches must be performed to find model patterns that match input scenes

12. Preliminary Concept • Programs can be characterized as a space consisting of a set of states, and a set of operators that map from one state to other states • Types of states • One or more initial states – the starting points • A number of intermediate states • One or more goal states

13. Well-defined “goal-based” problem • Initial state • Operator or successor function – for any state x, returns s(x), the set of states reachable from x with one action • State space – all states reachable from the initial state by any sequence of actions • Path – a sequence through the state space • Path cost – a function that assigns a cost to a path; the cost of a path is the sum of the costs of the individual actions along the path • Goal test – a test to determine whether a state is a goal state • Solution – a sequence of operations that maps an initial state to a goal state
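These components can be bundled into one object. The sketch below is a hypothetical container of our own devising (the class and parameter names are assumptions, not part of the slides), showing how path cost is derived from the step costs:

```python
class SearchProblem:
    """Illustrative bundle of the components of a well-defined problem."""
    def __init__(self, initial, successors, step_cost, goal_test):
        self.initial = initial            # initial state
        self.successors = successors      # s(x): states reachable from x in one action
        self.step_cost = step_cost        # cost of one action between two states
        self.goal_test = goal_test        # is this state a goal state?

    def path_cost(self, path):
        # Cost of a path = sum of the costs of its individual actions
        return sum(self.step_cost(a, b) for a, b in zip(path, path[1:]))
```

For example, with unit step costs, a three-state path has cost 2, one per action taken.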

14. Example

15. How to tell which algorithm is better? • Completeness • If at least one solution exists, the algorithm is guaranteed to find a solution within a finite amount of time • Time Complexity • The worst-case amount of time that the algorithm takes to run • Space Complexity • The worst-case amount of memory that the algorithm uses • Optimality • If a solution is found, is it guaranteed to be an optimal one, i.e. the one with minimum cost?

16. How to tell which algorithm is better? (cont.) • A good solution requires the fewest operations, or the least cost, to map from an initial state to a goal state • The time and space complexities of an algorithm may be defined in terms of its best-, average-, or worst-case performance in completing some task

17. How to implement our search space?

18. Graph and Tree Representation

19. Graph and Tree Representation • Traditionally a search space is represented as a diagram of a directed graph or a tree. • Each node or vertex in the graph corresponds to a problem state. • Arcs between nodes correspond to transformations or mappings between the states. • A tree is a graph in which each node has at most one parent. • The immediate successors of a node are referred to as its children or offspring; nodes sharing the same parent are siblings. • The predecessor nodes are ancestors. • An immediate ancestor of a node is its parent.
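In code, such a directed graph is commonly stored as an adjacency list. A small sketch (the particular nodes and arcs here are illustrative, not taken from the slides' map):

```python
# A directed graph as an adjacency list: each key is a problem state (node),
# each value lists the states reachable by one transformation (arc).
graph = {
    'S': ['A', 'D'],
    'A': ['B', 'D'],
    'B': ['C', 'E'],
    'C': [],
    'D': ['E'],
    'E': ['F'],
    'F': ['G'],
    'G': [],
}

def children(graph, node):
    """Immediate successors (children) of a node."""
    return graph.get(node, [])
```

A tree is then just the special case where every node appears as a child of at most one other node.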

20. “Search”… “Search”… and “Search” • A search procedure is a strategy for selecting the order in which nodes are generated and a given path is selected • The aim of search is not to produce complete physical trees in memory, but rather to explore as little of the virtual tree as possible while looking for root-goal paths • Search problems can be classified into two groups according to the information used to carry out a given strategy: • Uninformed Search • Informed Search

21. Uninformed Search (or Blind Search) • Orders nodes without using any domain-specific information (blindly followed) • This algorithm uses only the initial state, the search operators, and a test for a solution • A blind search should proceed in a systematic way by exploring nodes in some predetermined order • In general, the same implementation can be used for various problems

22. Uninformed Search (or Blind Search) (cont.) • Drawback • does not take into account the specific nature of the problem • most search spaces are extremely large, and an uninformed search (especially of a tree or graph) will take a reasonable amount of time only for small examples. • Example • Breadth-First Search • Depth-First Search • Depth-First Iterative Deepening Search

23. Informed Search • Informed search tries to reduce the amount of search that must be done by making intelligent choices for the nodes that are selected for exploration and expansion. • This implies that there are methods to evaluate the likelihood that given nodes are on solution paths • a heuristic that is specific to the problem is used as a guide • Example • Hill Climbing Method • Best-First Search • A* Search • Branch-and-Bound Search
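As a concrete illustration of such a problem-specific heuristic, here is the classic “misplaced tiles” count for the 8-puzzle introduced in the next slides. The function name and state encoding (a flat tuple with 0 for the blank) are our own assumptions:

```python
def misplaced_tiles(state, goal):
    """Heuristic estimate for the 8-puzzle: the number of tiles that are
    not in their goal position (the blank, encoded as 0, is not counted).
    States are 9-tuples read row by row on the 3x3 board."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```

A node whose state has fewer misplaced tiles is judged more likely to lie on a solution path, so an informed search would prefer to expand it first.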

24. Examples of Search Problems • Eight Puzzle • Travelling Salesman Problem

25. Example: 8-Puzzle

Initial State    Goal State
8 3 5            1 2 3
6 1 _            8 _ 4
2 4 7            7 6 5

26. State Space Definition of 8-Puzzle • State description: • position of each of the 8 tiles in one of 9 squares (or 3x3 board) • Operators: • blank position moves up, down, left, or right • Goal Test: • current state matches goal configuration illustrated in the previous slide • Path cost: • each move is assigned a cost of 1.

27. 8-Puzzle (cont.) • An optimal or good solution is one that maps an initial arrangement of tiles to the goal configuration with the smallest number of moves • The search space for the eight-puzzle problem is depicted as a tree structure

28. 8 puzzle Search Space

29. 8 Puzzles (cont.) • In the tree structure, • the nodes are depicted as puzzle configurations • the root node represents a randomly chosen starting configuration • its successor nodes correspond to the movements that are possible • A path is a sequence of nodes starting from the root and progressing downward to the goal node

30. Travelling Salesman Problem • Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once • Solving it by exhaustive search takes time that grows factorially; e.g. finding a minimal solution with only 10 cities already means examining 10! (3,628,800) tours
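A brute-force sketch makes the factorial blow-up concrete: it enumerates every tour starting from city 0. The function name and matrix layout are our own choices for illustration:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Exhaustive TSP: try every ordering of the remaining cities, O(n!) tours.
    dist is a symmetric matrix of pairwise distances; returns (tour, length)."""
    n = len(dist)
    best_tour, best_len = None, float('inf')
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)                            # visit each city once, return home
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```

With 4 cities this checks 3! = 6 tours; with 10 cities it would already check 362,880 orderings from a fixed start, which is why uninformed enumeration does not scale.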

31. Examples of Uninformed Search • Breadth-First Search • Depth-First Search • Depth-First Iterative Deepening Search

32. Breadth-First Search (BFS) • Breadth-first search is performed by exploring all nodes at a given depth before proceeding to the next level • Advantage: it always finds a minimum-length path to a solution when one exists • Disadvantage: a great many nodes may need to be explored before a solution is found
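A compact BFS over an adjacency-list graph can be sketched as follows (this version prunes revisited nodes; the slides' tree traces that follow do not, which is why repeated nodes appear in their queues):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explore all nodes at one depth before the next.
    Returns a path with the fewest edges from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()           # FIFO queue gives level-by-level order
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:       # skip nodes already reached
                visited.add(nbr)
                queue.append(path + [nbr])
    return None
```

Because whole levels are kept on the queue, memory grows with the width of the tree, which is the disadvantage noted above.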

33. Example: Map Navigation • State Space: S = start, G = goal, other nodes = intermediate states, links = legal transitions • [Figure: map graph with nodes S, A, B, C, D, E, F, G]

34. BFS Search Tree • [Tree: root S] • Queue = {S} • Select S • Goal(S) = true? If not, Expand(S)

35. BFS Search Tree • [Tree: S expands to A, D] • Queue = {A, D} • Select A • Goal(A) = true? If not, Expand(A)

36. BFS Search Tree • [Tree: A expands to B, D] • Queue = {D, B, D} • Select D • Goal(D) = true? If not, Expand(D)

37. BFS Search Tree • [Tree: D expands to A, E] • Queue = {B, D, A, E} • Select B • etc.

38. BFS Search Tree • [Tree expanded to level 3] • Queue = {C, E, S, E, S, B, B, F}

39. BFS Search Tree • [Tree expanded to level 4, reaching G] • Expand the queue until G is at the front • Select G • Goal(G) = true
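The queue evolution in these slides can be reproduced with a small tree-search sketch: no visited set, only avoiding a step straight back to the parent. The adjacency list below is our reconstruction of the map from the queue traces, so treat it as an assumption:

```python
from collections import deque

# Map adjacency reconstructed from the slides' queue traces (an assumption).
adj = {
    'S': ['A', 'D'], 'A': ['S', 'B', 'D'], 'B': ['A', 'C', 'E'],
    'C': ['B'], 'D': ['S', 'A', 'E'], 'E': ['B', 'D', 'F'],
    'F': ['E', 'G'], 'G': ['F'],
}

def bfs_tree_trace(adj, start, steps):
    """Tree-style BFS: expand nodes in FIFO order without a visited set,
    never stepping straight back to the parent, recording the queue each step."""
    frontier = deque([(start, None)])          # (node, parent)
    traces = []
    for _ in range(steps):
        node, parent = frontier.popleft()
        for nbr in adj[node]:
            if nbr != parent:                  # don't immediately undo the last move
                frontier.append((nbr, node))
        traces.append([n for n, _ in frontier])
    return traces
```

Running `bfs_tree_trace(adj, 'S', 7)` reproduces the queues shown, e.g. {D, B, D} after expanding A and {C, E, S, E, S, B, B, F} at level 3.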

40. BFS

41. Examples of Uninformed Search • Breadth-First Search • Depth-First Search • Depth-First Iterative Deepening Search

42. Depth-First Search • Depth-first search is performed by diving downward into a tree • It generates a child node from the most recently expanded node, then generates that child’s children, and so on, until a goal is found or some cutoff depth d is reached • If a goal is not found when a leaf node is reached, or at the cutoff point, the program backtracks to the most recently expanded node and generates another of its children • This process continues until a goal is found or failure occurs

43. Depth-First Search (cont.) • Depth-first search is preferred over breadth-first search when the search tree is known to have a plentiful number of goals • The depth cutoff introduces some problems: if it is set too shallow, goals may be missed; if set too deep, extra computation may be performed
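The dive-and-backtrack behavior with a cutoff can be sketched recursively; the function name `dls` (depth-limited search) and the adjacency-list representation are our own choices:

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited DFS: dive downward from `node`, backtracking at leaves
    or when the cutoff `limit` is reached. Returns a path to goal, or None."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None                          # cutoff reached: backtrack
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1, path + [child])
        if result is not None:               # goal found below this child
            return result
    return None                              # all children exhausted: failure here
```

A `None` result is ambiguous: either no goal exists in the subtree, or the cutoff was too shallow, which is exactly the problem noted above.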

44. DFS Search Tree • [Tree: S expands to A, D] • Stack = {A, D}

45. DFS Search Tree • [Tree: A expands to B, D] • Stack = {B, D, D}

46. DFS Search Tree • [Tree: B expands to C, E] • Stack = {C, E, D, D}

47. DFS Search Tree • [Tree: C expands to D, F] • Stack = {D, F, D, D}

48. DFS Search Tree • [Tree: path continues down to G] • Stack = {G, D, D}

49. DFS

50. Depth-First Search with a depth limit • Suppose we set the depth limit to 3; then we won’t reach the goal node.
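Depth-first iterative deepening, listed earlier among the uninformed methods, fixes exactly this failure: it reruns depth-limited DFS with limit = 0, 1, 2, …, so a goal just beyond any one cutoff is still found on a later pass. A sketch (names and the `max_depth` safety bound are our own):

```python
def iterative_deepening(graph, start, goal, max_depth=20):
    """Depth-first iterative deepening: repeat depth-limited DFS with
    increasing limits, so a too-shallow cutoff no longer misses the goal."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                      # cutoff: backtrack
        for child in graph.get(node, []):
            found = dls(child, limit - 1, path + [child])
            if found is not None:
                return found
        return None

    for limit in range(max_depth + 1):       # limit = 0, 1, 2, ...
        result = dls(start, limit, [start])
        if result is not None:
            return result
    return None
```

Here a goal at depth 4 is missed by the passes with limit ≤ 3 but found on the limit-4 pass, combining DFS's low memory use with BFS's guarantee of a shallowest solution.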