
Artificial Intelligence

This article discusses the concept of a search problem and its importance in problem solving. It covers topics such as the search space, graph theory, tree representations, algorithm types, complexity, and key concepts in search, and provides examples of different problem types and their state space representations.


Presentation Transcript


  1. Artificial Intelligence Search Problem

  2. Search Problem Search is a problem-solving technique that explores successive stages in the problem-solving process.

  3. Search Space • We need to define a space to search in order to find a solution to the problem • To successfully design and implement a search algorithm, we must be able to analyze and predict its behavior.

  4. State Space Search One tool for analyzing the search space is to represent it as a state space graph; graph theory then lets us analyze both the problem and its solutions.

  5. Graph Theory A graph consists of a set of nodes and a set of arcs or links connecting pairs of nodes. (Figure: a graph whose nodes are River1, River2, Island1, and Island2.)

  6. Graph structure • Nodes = {a, b, c, d, e} • Arcs = {(a, b), (a, d), (b, c), …} (Figure: the five nodes drawn with these arcs.)
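As a concrete illustration, the graph on this slide could be stored in Python as a dictionary mapping each node to its neighbours. This is a sketch only: the slide names just the arcs (a,b), (a,d), and (b,c), so the remaining arcs below are assumptions.

```python
# Adjacency-list representation of the example graph on this slide.
# Only the arcs (a,b), (a,d), (b,c) are named on the slide; the rest are
# illustrative assumptions.
graph = {
    "a": ["b", "d"],
    "b": ["c"],
    "c": [],
    "d": ["e"],   # assumed arc
    "e": [],
}

# Recover the arc set as (from, to) pairs from the adjacency list.
arcs = [(u, v) for u, neighbours in graph.items() for v in neighbours]
print(arcs)   # [('a', 'b'), ('a', 'd'), ('b', 'c'), ('d', 'e')]
```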

  7. Tree • A tree is a graph in which any two nodes have at most one path between them. • The tree has a root. (Figure: a rooted tree with nodes a through j.)

  8. Space representation In the state space representation of a problem, the nodes of a graph correspond to partial problem-solution states and the arcs correspond to steps in the problem-solving process.

  9. Example • Consider the game of Tic-Tac-Toe.

  10. A simple example: traveling on a graph (Figure: a graph with start state A, goal state F, intermediate states B, C, D, and E, and a cost on each arc.)

  11. Search tree (Figure: a search tree for this graph with nodes state = A, cost = 0; state = B, cost = 3; state = D, cost = 3; state = C, cost = 5; state = F, cost = 12, a goal state; and state = A, cost = 7.) Search tree nodes and states are not the same thing!

  12. Full search tree (Figure: the complete search tree, repeating the nodes above and adding state = E, cost = 7; state = A, cost = 7; state = F, cost = 11, another goal state; state = B, cost = 10; state = D, cost = 10; and so on.)
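The closing remark that search-tree nodes and states are not the same thing can be made concrete with a small node structure: each tree node wraps a state together with the accumulated cost and the parent node that generated it, so the same state (A at cost 0, A again at cost 7) may appear in several nodes. A minimal, illustrative sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A search-tree node: a state plus bookkeeping about how we reached it."""
    state: str
    cost: float = 0.0
    parent: Optional["Node"] = None

    def path(self):
        """Reconstruct the sequence of states from the root to this node."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

# The same state can occur in different nodes with different costs:
root = Node("A", cost=0)
b = Node("B", cost=3, parent=root)
a_again = Node("A", cost=7, parent=b)   # state A revisited at cost 7
print(a_again.path())                   # ['A', 'B', 'A']
```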

  13. Problem types • Deterministic, fully observable → single-state problem • The solution is a sequence of states • Non-observable → sensorless problem • The problem solver may have no idea where it is; the solution is still a sequence • Nondeterministic and/or partially observable → contingency problem • Unknown state space → exploration problem

  14. Algorithm types • There are two kinds of search algorithms • Complete • guaranteed to find a solution or prove there is none • Incomplete • may not find a solution even when one exists • often more efficient (or there would be no point)

  15. Comparing Searching Algorithms: Will it find a solution? The best one? Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

  16. Comparing Searching Algorithms: Complexity The branching factor b of a node is the number of arcs going out of the node. • Def.: The time complexity of a search algorithm is • the worst-case amount of time it will take to run, • expressed in terms of • maximum path length m • maximum branching factor b. Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), also expressed in terms of m and b.
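As a worked count behind these definitions: with branching factor b and maximum path length m, a search tree contains at most 1 + b + b^2 + … + b^m = (b^(m+1) − 1) / (b − 1) = O(b^m) nodes, which is where the O(b^m) worst-case time bounds quoted later for breadth-first and depth-first search come from.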

  17. Example: the 8-puzzle. • Given: a board situation for the 8-puzzle with the tiles 1 3 8 2 7 5 4 6 and the blank. • Problem: find a sequence of moves that transforms this board situation into a desired goal situation, here the standard layout with rows 1 2 3 / 8 blank 4 / 7 6 5.

  18. State Space representation In the state space representation of a problem, the nodes of a graph correspond to partial problem-solution states and the arcs correspond to steps (actions) in the problem-solving process.

  19. Key concepts in search • Set of states that we can be in • Including an initial state… • … and goal states (equivalently, a goal test) • For every state, a set of actions that we can take • Each action results in a new state • Given a state, produces all states that can be reached from it • Cost function that determines the cost of each action (or path = sequence of actions) • Solution: path from initial state to a goal state • Optimal solution: solution with minimal cost
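These ingredients map naturally onto a small problem interface. The Python skeleton below is an illustrative sketch (the class and method names are my own, not from the slides); the later examples in this transcript could all be phrased as subclasses of it.

```python
class SearchProblem:
    """Abstract description of a search problem, mirroring the concepts above."""

    def initial_state(self):
        """Return the state the search starts from."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: does this state satisfy the goal?"""
        raise NotImplementedError

    def actions(self, state):
        """All actions applicable in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """The state reached by applying the action in this state."""
        raise NotImplementedError

    def cost(self, state, action):
        """Cost of taking the action in this state (often 1 per step)."""
        return 1
```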

  20. (Figure: a search tree for the traveling-salesman example rooted at ( NewYork ) with accumulated cost 0; its successors ( NewYork, Boston ), ( NewYork, Miami ), ( NewYork, Dallas ), and ( NewYork, Frisco ) each carry the cost of the corresponding trip, and deeper states such as ( NewYork, Boston, Miami ) and ( NewYork, Frisco, Miami ) accumulate further costs.) Keep track of accumulated costs in each state if you want to be sure to get the best path.

  21. Example: Route Finding • Initial state • The city the journey starts in • Operators • Driving from city to city • Goal test • Is the current location the destination city? (Figure: a map with the cities Liverpool, Leeds, Nottingham, Manchester, Birmingham, and London.)

  22. State space representation (salesman) • State: • the list of cities that are already visited • Ex.: ( NewYork, Boston) • Initial state: • Ex.: ( NewYork ) • Rules: • add 1 city to the list that is not yet a member • add the first city if you already have 5 members • Goal criterion: • first and last city are equal
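A hedged sketch of this formulation in Python, using the five cities from the previous slide; the function names are illustrative, not from the slides.

```python
CITIES = ["NewYork", "Boston", "Miami", "Dallas", "Frisco"]  # cities from slide 20

def successors(state):
    """Successor states for the salesman formulation above.
    A state is the tuple of cities visited so far."""
    if len(state) == len(CITIES):
        # All five cities visited: the only remaining move is returning to the start.
        return [state + (state[0],)]
    # Otherwise, extend the tour with any city not yet visited.
    return [state + (city,) for city in CITIES if city not in state]

def is_goal(state):
    """Goal criterion: a complete tour whose first and last city are equal."""
    return len(state) == len(CITIES) + 1 and state[0] == state[-1]

print(successors(("NewYork",)))   # all tours of length 2 starting in NewYork
```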

  23. Example: The 8-puzzle • states? locations of the tiles • actions? move the blank left, right, up, or down • goal? = goal state (given) • path cost? 1 per move
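One common encoding of these states and actions, shown here as a sketch rather than a definitive implementation: a state is a tuple of nine entries read row by row, with 0 standing for the blank. The goal tuple follows the layout on slide 17, assuming the blank sits in the centre; the start position below is purely illustrative.

```python
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # goal layout from slide 17, 0 = blank

MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def successors(state):
    """Yield (action, new_state) pairs for moving the blank up/down/left/right."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        if (action == "up" and row == 0) or (action == "down" and row == 2):
            continue
        if (action == "left" and col == 0) or (action == "right" and col == 2):
            continue
        target = blank + delta
        new_state = list(state)
        new_state[blank], new_state[target] = new_state[target], new_state[blank]
        yield action, tuple(new_state)

# Each move has path cost 1, matching the slide.
start = (1, 3, 0, 8, 2, 5, 7, 4, 6)   # illustrative start position
for action, nxt in successors(start):
    print(action, nxt)
```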

  24. Example: robotic assembly • states?: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled • actions?: continuous motions of the robot joints • goal test?: complete assembly • path cost?: time to execute

  25. Example: Chess • Problem: develop a program that plays chess • 1. A way to represent board situations • Ex.: the list (( king_black, 8, C), ( knight_black, 7, B), ( pawn_black, 7, G), ( pawn_black, 5, F), ( pawn_white, 2, H), ( king_white, 1, E)) (Figure: the corresponding board, with ranks 1–8 and files A–H.)
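The list notation on this slide translates directly into a Python data structure; a minimal sketch using the position from the slide:

```python
# Board situation as a list of (piece, rank, file) triples, as on the slide.
board = [
    ("king_black",   8, "C"),
    ("knight_black", 7, "B"),
    ("pawn_black",   7, "G"),
    ("pawn_black",   5, "F"),
    ("pawn_white",   2, "H"),
    ("king_white",   1, "E"),
]

# Only occupied squares are stored; every other square is empty.
occupied = {(rank, file): piece for piece, rank, file in board}
print(occupied.get((8, "C")))   # 'king_black'
```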

  26. Chess search tree: ~15 alternatives after move 1, ~(15)² after move 2, ~(15)³ after move 3, and so on. Need very efficient search techniques to find good paths in such combinatorial trees.

  27. AND-OR tree? Ex.: Blocks world problem. • Initially: C is on A, and B is on the table. • Rules: any free block may be moved onto another block or onto the table. • Goal: A is on B and B is on C. The goal decomposes into two subgoals joined by AND, "A on B" AND "B on C", assuming independence of the subgoal states.

  28. Search in State Spaces • Effects of moving a block (illustration and list-structure iconic model notation)

  29. Avoiding Repeated States • In increasing order of effectiveness in reducing the size of the state space, and with increasing computational cost: 1. Do not return to the state you just came from. 2. Do not create paths with cycles in them. 3. Do not generate any state that was ever created before. • The net effect depends on the frequency of “loops” in the state space.
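The three pruning rules differ only in how much history a successor must be checked against. A sketch of each test, assuming the illustrative Node class introduced after slide 12 (with state and parent fields):

```python
def not_parent(node, successor_state):
    """Rule 1: do not return to the state you just came from."""
    return node.parent is None or successor_state != node.parent.state

def not_on_path(node, successor_state):
    """Rule 2: do not create paths with cycles, i.e. never revisit a state
    that is already on the path back to the root."""
    while node is not None:
        if node.state == successor_state:
            return False
        node = node.parent
    return True

def not_generated_before(explored, successor_state):
    """Rule 3: do not generate any state that was ever created before.
    'explored' is a set of all states seen so far (most memory, most pruning)."""
    return successor_state not in explored
```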

  30. Forward versus backward reasoning: • Forward reasoning (or data-driven): from initial states to goal states. (Figure: arrows leading from the initial states toward the goal states.)

  31. Forward versus backward reasoning: • Backward reasoning (or backward chaining / goal-driven): from goal states to initial states. (Figure: arrows leading from the goal states back toward the initial states.)

  32. Data-Driven search • It is also called forward chaining • The problem solver begins with the given facts and a set of legal moves or rules for changing state, and applies them to arrive at the goal.

  33. Goal-Driven Search • Take the goal that we want to solve and see what rules or legal moves could be used to generate this goal. • So we move backward.

  34. Search Implementation • In both directions of search, we must find a path from the start state to a goal. • We use goal-driven search if • The goal is given in the problem • There are a large number of rules • The problem data are not given

  35. Search Implementation • Data-driven search is used if • All or most of the data are given • There are a large number of potential goals • It is difficult to formulate a goal

  36. Criteria for choosing a direction: • Sometimes the two directions are equivalent: for the 8-puzzle boards shown earlier, forward and backward search even use the same rules! • Sometimes there is no way to start from the goal states • because there are too many of them (Ex.: chess) • because you can't (easily) formulate the rules in both directions.

  37. General Search Considerations • Given an initial state, operators, and a goal test • Can you give the agent additional information? • Uninformed search strategies • Have no additional information • Informed search strategies • Use problem-specific information • A heuristic measure (a guess at how far a state is from the goal)

  38. Classical Search Strategies • Breadth-first search • Depth-first search • Bidirectional search • Depth-bounded depth-first search • like depth-first search, but with a limit on the depth of the search in the tree • Iterative deepening search • uses depth-bounded search but iteratively increases the limit (both are sketched below)
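The last two strategies build on each other; below is a hedged sketch of depth-bounded (depth-limited) depth-first search and the iterative deepening loop around it, written against the illustrative SearchProblem interface from slide 19. For simplicity it conflates "depth cutoff reached" with "no solution", which is enough to show the idea.

```python
def depth_limited_search(problem, state, limit):
    """Depth-first search that gives up below a fixed depth limit.
    Returns a list of states (a solution path) or None."""
    if problem.is_goal(state):
        return [state]
    if limit == 0:
        return None
    for action in problem.actions(state):
        result = depth_limited_search(problem, problem.result(state, action), limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(problem, max_depth=50):
    """Run depth-bounded search with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial_state(), limit)
        if result is not None:
            return result
    return None
```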

  39. Breadth-first search: • Move downwards, level by level, until the goal is reached; the space is explored in a level-by-level fashion. (Figure: a search tree rooted at S whose nodes are expanded level by level until the goal G is reached.)

  40. Breadth-first search • BFS is complete: if a solution exists, one will be found • Expand the shallowest unexpanded node • Implementation: the fringe is a FIFO queue, i.e., new successors go at the end

  41.–43. Breadth-first search (continued) • Step-by-step expansion of the shallowest unexpanded node; the fringe is a FIFO queue, so new successors go at the end.
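Putting the FIFO-queue description into code: a sketch of breadth-first search over the illustrative SearchProblem interface from slide 19, combined with rule 3 from slide 29 (never regenerate a state that was created before). Because paths are expanded in order of length, the first goal found is a shallowest one.

```python
from collections import deque

def breadth_first_search(problem):
    """BFS: expand the shallowest unexpanded node first.
    The fringe is a FIFO queue, so new successors go at the end."""
    start = problem.initial_state()
    if problem.is_goal(start):
        return [start]
    frontier = deque([[start]])          # queue of paths, shallowest first
    explored = {start}                   # rule 3: never regenerate a seen state
    while frontier:
        path = frontier.popleft()        # shallowest path in the queue
        for action in problem.actions(path[-1]):
            nxt = problem.result(path[-1], action)
            if nxt in explored:
                continue
            new_path = path + [nxt]
            if problem.is_goal(nxt):
                return new_path          # first goal found is a shallowest one
            explored.add(nxt)
            frontier.append(new_path)    # new successors go at the end
    return None                          # no solution exists
```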

  44. Analysis of BFS Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Is BFS complete? Yes • If a solution exists at level l, the path to it will be explored before any path of length l + 1 • It is impossible to fall into an infinite cycle • See this in AISpace by loading “Cyclic Graph Examples” or by adding a cycle to “Simple Tree”

  45. Analysis of BFS Def.: A search algorithm is optimal if, when it finds a solution, it is the best one. Is BFS optimal? Yes • E.g., two goal nodes: the red boxes • Any goal at level l (e.g. red box N7) will be reached before goals at deeper levels

  46. Analysis of BFS • Def.: The time complexity of a search algorithm is • the worst-case amount of time it will take to run, • expressed in terms of • maximum path length m • maximum forward branching factor b. • What is BFS’s time complexity, in terms of m and b? O(b^m) • Like DFS, in the worst case BFS must examine every node in the tree • E.g., single goal node -> red box

  47. Analysis of BFS • Def.: The space complexity of a search algorithm is • the worst-case amount of memory that the algorithm will use • (i.e., the maximal number of nodes on the frontier), • expressed in terms of • maximum path length m • maximum forward branching factor b. • What is BFS’s space complexity, in terms of m and b? O(b^m) • BFS must keep paths to all the nodes at level m

  48. Using Breadth-first Search • When is BFS appropriate? • space is not a problem • it's necessary to find the solution with the fewest arcs • When there are some shallow solutions • there may be infinite paths • When is BFS inappropriate? • space is limited • all solutions tend to be located deep in the tree • the branching factor is very large
