
CS 2710, ISSP 2610 Foundations of Artificial Intelligence




  1. CS 2710, ISSP 2610 Foundations of Artificial Intelligence Solving Problems by Searching Chapter 3

  2. Framework • AI is concerned with the creation of artifacts that… • Do the right thing • Given their circumstances and what they know

  3. Ideal Rational Agents “… should take whatever action is expected to maximize its performance measure on the basis of its percept sequence and whatever built-in knowledge it has” Key points: • Performance measure • Actions • Percept sequence • Built-in knowledge

  4. Rational agents How to design this? (Diagram: the agent receives percepts from its environment through sensors and performs actions on it through effectors; the "?" is the agent program to be designed.)

  5. Goal-based Agents Agents that take actions in the pursuit of a goal or goals.

  6. Goal-based Agents • What should a goal-based agent do when none of the actions it can currently perform results in a goal state? • Choose an action that appears to lead to a state that is closer to a goal than the current one is.

  7. Problem Solving as Search One way to address these issues is to view goal-attainment as problem solving, and viewing that as a search through a state space. In chess, e.g., a state is a board configuration

  8. Problem Solving A problem is characterized as: • An initial state • A set of actions • A goal test • A cost function

  9. Problem Solving A problem is characterized as: • An initial state • A set of actions • successors: state → set of states • A goal test • goalp: state → true or false • A cost function • edgecost: edge between states → cost
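This characterization maps directly onto code. A minimal sketch, assuming nothing beyond the slide: the Problem wrapper and the toy number-line instance below are illustrative, while successors, goalp, and edgecost follow the signatures above.

```python
# A minimal sketch of the problem interface above. The Problem class
# and the number-line example are illustrative, not course code.

class Problem:
    def __init__(self, initial, successors, goalp, edgecost):
        self.initial = initial        # initial state
        self.successors = successors  # state -> set of states
        self.goalp = goalp            # state -> True or False
        self.edgecost = edgecost      # edge between states -> cost

# Toy instance: walk the number line from 0 to 3, each step costing 1.
p = Problem(
    initial=0,
    successors=lambda s: {s - 1, s + 1},
    goalp=lambda s: s == 3,
    edgecost=lambda s1, s2: 1,
)
```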

  10. Example Problems • Toy problems (but sometimes useful) • Illustrate or exercise various problem-solving methods • Concise, exact description • Can be used to compare performance • Examples: 8-puzzle, 8-queens problem, Cryptarithmetic, Vacuum world, Missionaries and cannibals, simple route finding • Real-world problems • More difficult • No single, agreed-upon description • Examples: Route finding, Touring and traveling salesperson problems, VLSI layout, Robot navigation, Assembly sequencing

  11. Toy Problems: The vacuum world • The world has only two locations • Each location may or may not contain dirt • The agent may be in one location or the other • 8 possible world states • Three possible actions: Left, Right, Suck • Goal: clean up all the dirt (Figure: the eight possible world states, numbered 1–8.)

  12. Toy Problems: The vacuum world (Figure: the state space, with transitions labeled L, R, and S for Left, Right, and Suck.) • States: one of the 8 states given earlier • Operators: move left, move right, suck • Goal test: no dirt left in any square • Path cost: each action costs one
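The vacuum-world formulation above can be sketched as follows; the state encoding (agent location plus a frozenset of dirty squares, giving 2 × 2 × 2 = 8 states) and the function names are illustrative.

```python
# A sketch of the vacuum-world formulation. A state is (loc, dirt):
# loc is 'A' or 'B', dirt is a frozenset of dirty locations.

def successors(state):
    loc, dirt = state
    return {
        'Left':  ('A', dirt),         # move left (no-op if already at A)
        'Right': ('B', dirt),         # move right (no-op if already at B)
        'Suck':  (loc, dirt - {loc}), # remove dirt from the current square
    }

def goalp(state):
    _, dirt = state
    return not dirt                   # no dirt left in any square

def edgecost(s1, action, s2):
    return 1                          # each action costs one
```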

  13. Toy Problems: Missionaries and cannibals • Three missionaries and three cannibals want to cross a river • There is a boat that can hold two people • Cross the river, but make sure that the missionaries are not outnumbered by the cannibals on either bank • Needs a lot of abstraction • Crocodiles in the river, the weather and so on • Only the endpoints of the crossing are important • Only two types of people

  14. Toy Problems: Missionaries and cannibals • Problem formulation • States: ordered sequence of three numbers representing the number of missionaries, cannibals, and boats on the bank of the river from which they started. The start state is (3, 3, 1) • Operators: take two missionaries, two cannibals, or one of each across in the boat • Goal test: reached state (0, 0, 0) • Path cost: number of crossings
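A sketch of this formulation in Python. The slide lists only the two-person crossings, but since the boat can also carry one person, the single crossings are included below as well; the state encoding and names are illustrative.

```python
# Missionaries-and-cannibals sketch. A state is (m, c, b): missionaries,
# cannibals, and boats on the starting bank. Start (3, 3, 1), goal (0, 0, 0).

def valid(m, c):
    """No bank may have missionaries outnumbered by cannibals."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    if m > 0 and c > m:                  # starting bank
        return False
    if 3 - m > 0 and 3 - c > 3 - m:      # far bank
        return False
    return True

def successors(state):
    m, c, b = state
    # Boat loads: two missionaries, two cannibals, one of each, or one person.
    moves = [(2, 0), (0, 2), (1, 1), (1, 0), (0, 1)]
    result = set()
    for dm, dc in moves:
        if b == 1:                       # boat on starting bank: people leave
            nm, nc, nb = m - dm, c - dc, 0
        else:                            # boat on far bank: people return
            nm, nc, nb = m + dm, c + dc, 1
        if valid(nm, nc):
            result.add((nm, nc, nb))
    return result
```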

  15. Real-world problems • Route finding • Specified locations and transition along links between them • Applications: routing in computer networks, automated travel advisory systems, airline travel planning systems • Touring and traveling salesperson problems • “Visit every city on the map at least once and end in Bucharest” • Needs information about the visited cities • Goal: Find the shortest tour that visits all cities • NP-hard, but a lot of effort has been spent on improving the capabilities of TSP algorithms • Applications: planning movements of automatic circuit board drills

  16. Real-world problems • VLSI layout • Place cells on a chip so that they do not overlap and that there is room for connecting wires to be placed between the cells • Robot navigation • Generalization of the route finding problem • No discrete set of routes • Robot can move in a continuous space • Infinite set of possible actions and states • Additional problem: errors in sensor readings and motor controls • Assembly sequencing • Automatic assembly of complex objects • The problem is to find an order in which to assemble the parts of some object • Wrong order: some of the previous work needs to be undone

  17. What is a Solution? • A sequence of actions that when performed will transform the initial state into a goal state (e.g., the sequence of actions that gets the missionaries safely across the river) • Or sometimes just the goal state (e.g., infer molecular structure from mass spectrographic data)

  18. Our Current Framework • Backtracking state-space search • Others: • Constraint-based search • Optimization search • Adversarial search

  19. Initial Assumptions • The agent knows its current state • Only the actions of the agent will change the world • The effects of the agent’s actions are known and deterministic All of these are defeasible… likely to be wrong in real settings.

  20. Another Assumption • Searching/problem-solving and acting are distinct activities • First you search for a solution (in your head), then you execute it

  21. Generalized Search
      Start by adding the initial state to a list, called fringe
      Loop:
          If there are no states left, then fail
          Otherwise remove a node from fringe, cur
          If it's a goal state, return it
          Otherwise expand it and add the resulting nodes to fringe
      (Expand a node = generate its successors)

  22. Evaluation Criteria • Completeness • Does it find a solution when one exists? • Time • The number of nodes generated during search

  23. Search Criteria • Space • Maximum number of nodes in memory at one time • Optimality • Does it always find a least-cost solution? • Cost considered: sum of the edgecosts of the path to the goal (the g-val)

  24. Time and Space Complexity Measured in Terms of • b – maximum branching factor of the search tree • d – depth of the shallowest goal • m – maximum depth of any path in the state space (may be infinite)
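These parameters give concrete size estimates: a search tree of branching factor b filled to depth d contains 1 + b + b² + … + b^d nodes. A quick sketch (tree_size is an illustrative name):

```python
# Number of nodes in a complete search tree of branching factor b
# filled to depth d: 1 + b + b^2 + ... + b^d.
def tree_size(b, d):
    return sum(b**i for i in range(d + 1))

# With b = 10 and d = 6 this is 1,111,111 — the breadth-first figure
# quoted on the bidirectional-search slide.
```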

  25. Blind Search Strategies • Let’s define and evaluate the following algorithms • Breadth-first search • Uniform-cost search • Depth-first search • (On the board)
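These three strategies plug into the generic search loop as different queuing functions, deciding only where newly generated nodes go in the fringe. A sketch, with the fringe as a plain list (the gval attribute follows the graphsearch slide later in the deck; the names are illustrative):

```python
def bfs_qfun(new_nodes, fringe):
    return fringe + new_nodes          # breadth-first: FIFO, enqueue at the end

def dfs_qfun(new_nodes, fringe):
    return new_nodes + fringe          # depth-first: LIFO, push on the front

def ucs_qfun(new_nodes, fringe):
    # uniform-cost: keep the fringe ordered by path cost g, lowest first
    return sorted(fringe + new_nodes, key=lambda n: n.gval)
```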

  26. Generalized Search
      Start by adding the initial state to a list, called fringe
      Loop:
          If there are no states left, then fail
          Otherwise remove a node from fringe, cur
          If it's a goal state, return it
          Otherwise expand it and add the resulting nodes to fringe
      (Expand a node = generate its successors)

  27. Implementation issues • Nodes versus states • Search a tree or general graph: checking for duplicate states

  28.
      def treesearch(qfun, fringe):
          """qfun: queuing function
          fringe: a list containing the initial node"""
          while len(fringe) > 0:
              cur = fringe[0]
              fringe = fringe[1:]
              if goalp(cur):
                  return cur
              fringe = qfun(makeNodes(successors(cur)), fringe)
          return []
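As written, treesearch depends on goalp, successors, and makeNodes defined elsewhere in the course code. A self-contained sketch of a run, with those helpers stubbed out for a toy graph (the graph and all names are illustrative):

```python
# Toy DAG; here a "node" is just a state, so makeNodes is the identity.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}

def successors(state):
    return graph[state]

def goalp(state):
    return state == 'E'

def makeNodes(states):
    return list(states)

def bfs_qfun(new_nodes, fringe):
    return fringe + new_nodes          # FIFO queuing: breadth-first

def treesearch(qfun, fringe):
    while fringe:
        cur, fringe = fringe[0], fringe[1:]
        if goalp(cur):
            return cur
        fringe = qfun(makeNodes(successors(cur)), fringe)
    return []
```

Running `treesearch(bfs_qfun, ['A'])` expands A, B, C, D in breadth-first order and returns 'E'.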

  29.
      def graphsearch(qfun, fringe):
          expanded = {}
          while len(fringe) > 0:
              cur = fringe[0]
              fringe = fringe[1:]
              if goalp(cur):
                  return cur
              if not (cur.state in expanded and
                      expanded[cur.state].gval <= cur.gval):
                  expanded[cur.state] = cur
                  fringe = qfun(makeNodes(successors(cur)), fringe)
          return []

  30.
      def iterativeDeepening(start):
          result = []
          depthlim = 1
          startnode = Node(start)
          while not result:
              result = depthLimSearch([startnode], depthlim)
              depthlim = depthlim + 1
          return result

      def depthLimSearch(fringe, depthlim):
          while len(fringe) > 0:
              cur = fringe[0]
              fringe = fringe[1:]
              if goalp(cur):
                  return cur
              if cur.depth <= depthlim:
                  fringe = makeNodes(successors(cur)) + fringe
          return []
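A self-contained variant of the two functions above on a toy number-line problem (states are integers, the goal is 3). Node, goalp, and expand are illustrative stand-ins for the course's helpers; this version cuts off exactly at the depth limit.

```python
class Node:
    def __init__(self, state, depth=0):
        self.state = state
        self.depth = depth

def goalp(node):
    return node.state == 3

def expand(node):
    # Successors on the number line: one step left or right.
    return [Node(node.state - 1, node.depth + 1),
            Node(node.state + 1, node.depth + 1)]

def depthLimSearch(fringe, depthlim):
    while fringe:
        cur, fringe = fringe[0], fringe[1:]
        if goalp(cur):
            return cur
        if cur.depth < depthlim:       # cut off at the depth limit
            fringe = expand(cur) + fringe
    return []

def iterativeDeepening(start):
    depthlim = 1
    while True:                        # re-search with growing limits
        result = depthLimSearch([Node(start)], depthlim)
        if result:
            return result
        depthlim += 1
```

Starting from 0, the limits 1 and 2 fail, and the limit-3 search finds the goal at depth 3 — even though the leftward branch of the state space is unbounded, which would trap plain depth-first search.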

  31. Bidirectional search • Search forward from the initial state and backward from the goal • Stop when the two searches meet in the middle (Diagram: two search frontiers growing from Start and Goal and meeting in the middle.)

  32. Bidirectional Search: Go from Timisoara to Bucharest (Figure: the Romania road map, with cities Oradea, Zerind, Arad, Timisoara, Lugoj, Mehadia, Dobreta, Sibiu, Rimnicu Vilcea, Fagaras, Pitesti, Craiova, Bucharest, Giurgiu, Urziceni, Neamt, Iasi, Vaslui, Hirsova, and Eforie.)

  33. Bidirectional search • Bidirectional search merits • Big difference for problems with branching factor b in both directions • A solution of length d will be found in O(2b^(d/2)) = O(b^(d/2)) • For b = 10 and d = 6, only 2,222 nodes are needed instead of 1,111,111 for breadth-first search • Bidirectional search issues • Predecessors of a node need to be generated • Difficult when operators are not reversible • For each node: check if it appeared in the other search • Needs a hash table of O(b^(d/2))
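Under the slide's assumptions (reversible operators, so successors double as predecessors, and each frontier checked against the other side's hash table), bidirectional breadth-first search can be sketched as below. It returns the solution length rather than the path; all names are illustrative.

```python
from collections import deque

def bidirectional(start, goal, neighbors):
    """Length of a shortest path, searching from both ends at once."""
    if start == goal:
        return 0
    seen = {0: {start: 0}, 1: {goal: 0}}     # depth table per direction
    queues = [deque([start]), deque([goal])]
    while queues[0] and queues[1]:
        for side in (0, 1):
            for _ in range(len(queues[side])):   # expand one full layer
                cur = queues[side].popleft()
                for nxt in neighbors(cur):
                    if nxt in seen[1 - side]:    # the two searches meet
                        return seen[side][cur] + 1 + seen[1 - side][nxt]
                    if nxt not in seen[side]:
                        seen[side][nxt] = seen[side][cur] + 1
                        queues[side].append(nxt)
    return None

# Reversible toy operators: one step left or right on the number line.
steps = lambda s: [s - 1, s + 1]
```

Each side stores only the states it has reached, so the memory cost is the O(b^(d/2)) hash table from the slide rather than O(b^d).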
