Presentation Transcript


  1. Artificial Intelligence, Chapter 8: Uninformed Search. Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University

  2. Outline • Search Space Graphs • Breadth-First Search • Depth-First Search • Iterative Deepening

  3. Search in State Spaces - Motivating Examples • Search for the optimal weights in training a multi-layer perceptron • Search for the optimal solution by evolutionary computation • ex) wall-following robot

  4. Search in State Spaces - Motivating Examples • Agents that plan • State + action • State-space search is used to plan action sequences • ex) moving a block (Ch. 7): a state-space graph (the size of this search space is small)

  5. 1. Formulating the State Space • For a huge search space we need • Careful formulation • Implicit representation of large search graphs • An efficient search method

  6. 1. Formulating the State Space (Cont’d) • e.g.) 8-puzzle problem • State description • 3-by-3 array: each cell contains one of the numbers 1-8 or the blank symbol • Two choices for describing state transitions • 8 × 4 moves: one of the numbers 1-8 moves up, down, right, or left • 4 moves: the blank symbol moves up, down, right, or left (figure: start and goal configurations)
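
A minimal sketch (not from the slides) of the second, "4 moves" formulation in Python: the state is a flat 9-tuple read row by row, with 0 standing for the blank symbol. The function name `successors` and the example state are illustrative assumptions.

```python
# Sketch of the "4 moves" formulation: the blank symbol (encoded as 0) slides
# up, down, left, or right.  A state is a flat 9-tuple, read row by row.

def successors(state):
    """Yield (move, next_state) pairs obtained by sliding the blank one cell."""
    i = state.index(0)                      # position of the blank in the flat tuple
    row, col = divmod(i, 3)
    candidates = []
    if row > 0: candidates.append(("up", i - 3))
    if row < 2: candidates.append(("down", i + 3))
    if col > 0: candidates.append(("left", i - 1))
    if col < 2: candidates.append(("right", i + 1))
    for move, j in candidates:
        nxt = list(state)
        nxt[i], nxt[j] = nxt[j], nxt[i]     # swap the blank with the adjacent tile
        yield move, tuple(nxt)

# Example: with the blank in the centre cell, all four moves are available.
print([m for m, _ in successors((1, 2, 3, 8, 0, 4, 7, 6, 5))])
# -> ['up', 'down', 'left', 'right']
```

This formulation gives at most 4 applicable operators per state, versus 8 × 4 operator instances in the tile-centred formulation, which is why it is the more convenient choice.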

  7. 1. Formulating the State Space (Cont’d) • The number of nodes in the state-space graph: 9! ( = 362,880 ) • The state space for the 8-puzzle is divided into two separate graphs that are not reachable from each other

  8. 2. Components of Implicit State-Space Graphs • 3 basic components of an implicit representation of a state-space graph • 1. Description of the start node • 2. Operators • Functions of state transformation • Models of the effects of actions • 3. Goal condition • A true/false-valued function, or • A list of actual instances of state descriptions
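
A hedged sketch of how these three components might be bundled in code; the names (`SearchProblem`, `expand`, `is_goal`) are illustrative, not taken from the slides.

```python
# The three components of an implicit state-space graph, packaged as one object:
# a start node, a successor (operator) function, and a goal test.
from dataclasses import dataclass
from typing import Callable, Hashable, Iterable

@dataclass
class SearchProblem:
    start: Hashable                                    # 1. description of the start node
    expand: Callable[[Hashable], Iterable[Hashable]]   # 2. operators as a successor function
    is_goal: Callable[[Hashable], bool]                # 3. goal condition (true/false test)

# Example (commented): the 8-puzzle, reusing successors() from the sketch above.
# puzzle = SearchProblem(start=start_state,
#                        expand=lambda s: (t for _, t in successors(s)),
#                        is_goal=lambda s: s == goal_state)
```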

  9. 2. Components of Implicit State-Space Graphs • Two broad classes of search processes • Uninformed search • No problem-specific reason to prefer one part of the search space over any other • Operators are applied to nodes without using any special knowledge about the problem domain • Heuristic search • Problem-specific information exists to help focus the search • Uses evaluation functions • ex) the A* algorithm

  10. Uninformed Search for Graphs • Breadth-First Search • Depth-First Search • Iterative Deepening • State-space search tree • Each node represents a problem or sub-problem • Root of the tree: the initial problem to be solved • Children of a node are created by adding constraints

  11. 3. Breadth-First Search • Procedure 1. Apply all possible operators (the successor function) to the start node. 2. Apply all possible operators to all the direct successors of the start node. 3. Apply all possible operators to their successors, and so on, until a goal node is found. • Expanding: applying the successor function to a node
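
A sketch of this level-by-level procedure over an implicit graph, assuming the `expand`/`is_goal` interface sketched earlier; the FIFO queue guarantees that shallower nodes are expanded before deeper ones.

```python
# Breadth-first search: expand the start node, then all of its successors,
# level by level, until a goal node is found.
from collections import deque

def breadth_first_search(start, expand, is_goal):
    """Return a shortest path (list of states) from start to a goal, or None."""
    if is_goal(start):
        return [start]
    frontier = deque([start])          # FIFO queue: shallowest nodes come out first
    parent = {start: None}             # doubles as the set of already-generated states
    while frontier:
        state = frontier.popleft()
        for nxt in expand(state):      # "expanding": apply all operators to the node
            if nxt in parent:
                continue               # already generated by an equal-or-shorter path
            parent[nxt] = state
            if is_goal(nxt):
                path = [nxt]           # reconstruct the path by walking the parents
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            frontier.append(nxt)
    return None                        # the entire reachable space was searched
```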

  12. Figure 8.2 Breadth-First Search of the Eight-Puzzle

  13. 3. Breadth-First Search (Cont’d) • Advantage • When a goal node is found, we have found a path of minimal length to the goal • Disadvantage • Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node • Uniform-cost search [Dijkstra 1959] • A variant of BFS: expansion by equal cost (not equal depth) • If the costs of all arcs are identical, it behaves the same as BFS
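
A hedged sketch of the uniform-cost variant, ordering the frontier by accumulated path cost with a priority queue; here `expand` is assumed to yield (step_cost, next_state) pairs, which is an interface choice made for this example only.

```python
# Uniform-cost search: nodes are expanded cheapest-first (Dijkstra's strategy),
# so the first goal popped from the queue has minimal total path cost.
import heapq
import itertools

def uniform_cost_search(start, expand, is_goal):
    """expand(state) yields (step_cost, next_state); returns (cost, goal_state) or None."""
    counter = itertools.count()                 # tie-breaker so states are never compared
    frontier = [(0, next(counter), start)]      # priority queue keyed on accumulated cost
    best = {start: 0}
    while frontier:
        g, _, state = heapq.heappop(frontier)
        if g > best.get(state, float("inf")):
            continue                            # stale entry: a cheaper path was found later
        if is_goal(state):
            return g, state
        for step_cost, nxt in expand(state):
            new_g = g + step_cost
            if new_g < best.get(nxt, float("inf")):
                best[nxt] = new_g
                heapq.heappush(frontier, (new_g, next(counter), nxt))
    return None
```

With all arc costs equal to 1, the cost ordering coincides with the depth ordering, so this reduces to breadth-first search as the slide notes.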

  14. 4. Depth-First or Backtracking Search • Procedure • Generates the successors of a node just one at a time. • A trace is left at each node to indicate that additional operators can be applied there if needed. • At each node a decision must be made about which operator to apply first, which next, and so on. • This process is repeated until the depth bound is reached. • Chronological backtracking occurs when the search depth hits the depth bound.
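
A minimal sketch of depth-bounded, chronological backtracking search under the same assumed `expand`/`is_goal` interface; the recursion stack plays the role of the "trace" of operators still to be tried at each node.

```python
# Depth-first (backtracking) search with a depth bound: successors are tried
# one at a time, and the search backtracks chronologically when the bound is
# reached or all operators at a node have been exhausted.

def depth_first_search(state, expand, is_goal, bound, path=None):
    """Return a path (list of states) to a goal within `bound` steps, or None."""
    if path is None:
        path = [state]
    if is_goal(state):
        return path
    if bound == 0:
        return None                         # depth bound reached: backtrack
    for nxt in expand(state):               # operators applied in a fixed order
        if nxt in path:
            continue                        # avoid cycling back along the current path
        result = depth_first_search(nxt, expand, is_goal, bound - 1, path + [nxt])
        if result is not None:
            return result
    return None                             # all operators tried: chronological backtrack
```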

  15. 4. Depth-First or Backtracking Search (Cont’d) • 8-puzzle example • Depth bound: 5 • Operator order: left → up → right → down • Figure 8.3 Generation of the First Few Nodes in a Depth-First Search

  16. 4. Depth-First or Backtracking Search (Cont’d) • The graph when the goal is reached in depth-first search

  17. 4. Depth-First or Backtracking Search (Cont’d) • Advantage • Low memory requirement: linear in the depth bound • Saves only the part of the search tree consisting of the path currently being explored, plus traces • Disadvantage • No guarantee of a minimal-length path to the goal state • The possibility of having to explore a large part of the search space

  18. 5. Iterative Deepening • Advantages • The linear memory requirement of depth-first search • A guarantee of finding a goal node of minimal depth • Procedure • Successive depth-first searches are conducted, each with the depth bound increased by 1
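
A sketch of the procedure under the same assumed interface: a depth-limited search is restarted with bounds 0, 1, 2, ... until it succeeds (the cap `max_bound` is an illustrative safeguard, not part of the algorithm).

```python
# Iterative deepening: run successive depth-first searches with increasing
# depth bounds; the first path found is therefore of minimal depth.

def iterative_deepening(start, expand, is_goal, max_bound=50):
    def dls(state, bound, path):
        """Depth-limited DFS along the current path."""
        if is_goal(state):
            return path
        if bound == 0:
            return None
        for nxt in expand(state):
            if nxt in path:                 # skip states already on the current path
                continue
            found = dls(nxt, bound - 1, path + [nxt])
            if found is not None:
                return found
        return None

    for bound in range(max_bound + 1):      # bounds 0, 1, 2, ...
        result = dls(start, bound, [start])
        if result is not None:
            return result                   # minimal-depth goal, with linear memory use
    return None
```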

  19. 5. Iterative Deepening (Cont’d) Figure 8.5 Stages in Iterative-Deepening Search

  20. 5. Iterative Deepening (Cont’d) • The number of expanded nodes • In the case of breadth-first search • In the case of iterative-deepening search (in the worst case)
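
The expressions for these counts did not survive the transcript; the following is a reconstruction of the standard worst-case counts for branching factor b and shallowest goal depth d, consistent with the b/(b-1) ratio quoted on the next slide.

```latex
% Worst-case number of expanded nodes, branching factor b, goal at depth d
N_{\text{bf}} \;=\; 1 + b + b^{2} + \cdots + b^{d} \;=\; \frac{b^{d+1} - 1}{b - 1}

N_{\text{id}} \;=\; \sum_{j=0}^{d} \frac{b^{j+1} - 1}{b - 1}
              \;\approx\; \frac{b}{b-1}\, N_{\text{bf}} \quad \text{for large } d
```

For b = 10, b/(b-1) = 10/9 ≈ 1.11, which gives the roughly 11% overhead mentioned on the next slide.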

  21. 5. Iterative Deepening (Cont’d) • For large d, the ratio N_id/N_bf is b/(b-1) • For a branching factor of 10 and deep goals, iterative-deepening search expands about 11% more nodes than breadth-first search • The related technique of iterative broadening is useful when there are many goal nodes

  22. 6. Additional Readings and Discussion • Various improvements to chronological backtracking • Dependency-directed backtracking [Stallman & Sussman 1977] • Backjumping [Gaschnig 1979] • Dynamic backtracking [Ginsberg 1993]
