
CS 4700: Foundations of Artificial Intelligence


Presentation Transcript


  1. CS 4700: Foundations of Artificial Intelligence Prof. Carla P. Gomes gomes@cs.cornell.edu Module: Search I (Reading R&N: Chapter 3)

  2. Outline • Problem-solving agents • Problem types • Problem formulation • Example problems • Basic search algorithms

  3. Problem-solving agents • Problem-solving agents are goal-based agents. • 1. Goal formulation → a set of (desirable) world states. • 2. Problem formulation → what actions and states to consider, given a goal. • 3. Search for a solution → given the problem, search for a solution: a sequence of actions to achieve the goal. • 4. Execution of the solution

  4. Example: Romania • On holiday in Romania; currently in Arad. • Flight leaves tomorrow from Bucharest. • Formulate goal: • be in Bucharest • Formulate problem: • actions: drive between cities • states: various cities • Find solution: • sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest • Execution: • drive from Arad to Bucharest according to the solution [Figure: map of Romania; initial state Arad, goal state Bucharest] Environment: fully observable (map), deterministic, and the agent knows the effects of each action. Is this really the case?

  5. Problem-solving agents (Goal-based Agents)

  6. Problem types (in order of increasing complexity) • Deterministic, fully observable → single-state problem • Agent knows exactly which state it will be in; solution is a sequence of actions. • Non-observable → sensorless problem (conformant problem) • Agent may have no idea where it is (no sensors); it reasons in terms of belief states; solution is a sequence of actions (effects of actions certain). • Nondeterministic and/or partially observable → contingency problem • Actions uncertain, percepts provide new information about the current state • (adversarial problem if the uncertainty comes from other agents) • Unknown state space and uncertain action effects → exploration problem

  7. Example: vacuum world • Single-state, start in #5. Solution?

  8. Example: vacuum world • Single-state, start in #5. Solution? [Right, Suck] • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? Goal: 7 or 8

  9. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location. • Percept: [L, Clean], i.e., start in #5 or #7. Solution?

  10. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location. • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]

  11. Single-state problem formulation • A problem is defined by four items: • 1. Initial state, e.g., "at Arad" • 2. Actions available to the agent • Common description → successor function S(x) = set of (action, state) pairs • e.g., S(Arad) = {<Arad → Zerind, Zerind>, <Arad → Sibiu, Sibiu>, …} • Note: • State space – a graph in which the nodes are states and the arcs between nodes are actions. • Initial state + successor function implicitly define the state space. • A path in a state space is a sequence of states connected by a sequence of actions.

  12. Single-state problem formulation • 3. Goal test, which determines whether a given state is a goal state. • Can be • explicit, e.g., x = "at Bucharest" • implicit, e.g., Checkmate(x) • 4. Path cost, which assigns a numeric cost to each path, reflecting the performance measure of the agent. • e.g., sum of distances, number of actions executed, etc. • Step cost of taking action a to go from state x to state y → c(x,a,y); assumed to be ≥ 0 and additive. • A solution is a path (sequence of actions) leading from the initial state to a goal state. An optimal solution has the lowest path cost among all solutions.
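The four-item formulation maps naturally onto code. Below is a minimal Python sketch (the names Problem, successors, is_goal, and roads are illustrative, not from the slides), with the Romania route-finding fragment of slide 4 as the example:

```python
# A minimal sketch of the four-item problem formulation: initial state,
# successor function, goal test, and (additive) step cost.

class Problem:
    def __init__(self, initial, successors, is_goal, step_cost=lambda x, a, y: 1):
        self.initial = initial          # initial state, e.g. "Arad"
        self.successors = successors    # S(x) -> iterable of (action, state) pairs
        self.is_goal = is_goal          # goal test: state -> bool
        self.step_cost = step_cost      # c(x, a, y) >= 0, additive along a path

# Romania route-finding fragment: states are cities, actions are drives.
roads = {
    "Arad":    [("Arad->Zerind", "Zerind"), ("Arad->Sibiu", "Sibiu"),
                ("Arad->Timisoara", "Timisoara")],
    "Sibiu":   [("Sibiu->Fagaras", "Fagaras"), ("Sibiu->Rimnicu", "Rimnicu Vilcea")],
    "Fagaras": [("Fagaras->Bucharest", "Bucharest")],
}
romania = Problem(
    initial="Arad",
    successors=lambda city: roads.get(city, []),
    is_goal=lambda city: city == "Bucharest",
)
```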

  13. Vacuum-cleaner world • Percepts: location and contents, e.g., [A, Dirty] • Actions: Left, Right, Suck • states? • actions? • goal test? • path cost?

  14. Vacuum world state space graph • states? The agent is in one of n locations (here n = 2), each with or without dirt → n · 2^n states (8 for n = 2). • actions? Left, Right, Suck • goal test? no dirt at any location • path cost? 1 per action
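As a concrete instance, here is a sketch of the vacuum world just described, under the assumption that a state is a (location, set-of-dirty-locations) pair; for n = 2 locations that gives the n · 2^n = 8 states on the slide. Function names are my own:

```python
# Vacuum world with n = 2 locations: a state is (robot_location, dirty_set).

LOCATIONS = ("A", "B")

def vacuum_successors(state):
    loc, dirty = state
    i = LOCATIONS.index(loc)
    result = []
    if i + 1 < len(LOCATIONS):                      # Right: move one square right
        result.append(("Right", (LOCATIONS[i + 1], dirty)))
    if i > 0:                                       # Left: move one square left
        result.append(("Left", (LOCATIONS[i - 1], dirty)))
    result.append(("Suck", (loc, dirty - {loc})))   # Suck: clean current square
    return result

def vacuum_goal(state):
    _, dirty = state
    return not dirty            # no dirt at any location

start = ("A", frozenset({"A", "B"}))   # the slide's state #1: both squares dirty
print(vacuum_successors(start))
```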

  15. Example: The 8-puzzle • states? • actions? • goal test? • path cost?

  16. Example: The 8-puzzle • states? locations of tiles • actions? move blank left, right, up, down • goal test? = goal state (given) • path cost? 1 per move • [Note: finding an optimal solution in the n-puzzle family is NP-hard]
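A sketch of this 8-puzzle formulation, assuming states are 9-tuples read row by row with 0 for the blank (the slide's goal layout may differ from the one assumed here):

```python
# 8-puzzle sketch: a state is a tuple of 9 tiles (0 = blank), row by row.
# Actions move the blank; each move has cost 1.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout

def puzzle_successors(state):
    b = state.index(0)                # blank position
    row, col = divmod(b, 3)
    moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
    result = []
    for action, delta in moves.items():
        if action == "Up" and row == 0:    continue   # blank already on top row
        if action == "Down" and row == 2:  continue
        if action == "Left" and col == 0:  continue
        if action == "Right" and col == 2: continue
        t = list(state)
        t[b], t[b + delta] = t[b + delta], t[b]       # swap blank with neighbor
        result.append((action, tuple(t)))
    return result

def puzzle_goal(state):
    return state == GOAL
```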

  17. Example: robotic assembly • states? real-valued coordinates of robot joint angles; parts of the object to be assembled • actions? continuous motions of robot joints • goal test? complete assembly • path cost? time to execute

  18. Real-world problems • Route-finding problems • Touring problems (end up in the same city) → Traveling Salesman Problem • VLSI layout → positioning millions of components and connections on a chip to minimize area, circuit delays, etc. • Robot navigation • Automatic assembly of complex objects • Protein design – a sequence of amino acids that will fold into a 3-dimensional protein with the right properties.

  19. Now that we have formulated the problem, how do we find a solution? • Search through the state space. • We will consider search techniques that use an explicit search tree generated by the initial state + successor function: initialize the fringe (initial node); loop: choose a node for expansion according to the strategy; goal node? → done; otherwise expand the node with the successor function.
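The loop on this slide can be written directly in Python. This is a sketch of plain tree search with no repeated-state check, exactly as described; popping from the front (FIFO) stands in for "choose a node according to strategy":

```python
# The slide's loop as a Python sketch. `fringe` holds (state, path) pairs;
# swapping the pop order changes the strategy.

def tree_search(initial, successors, is_goal):
    fringe = [(initial, [])]                # (state, actions taken so far)
    while fringe:
        state, path = fringe.pop(0)         # choose a node per the strategy
        if is_goal(state):                  # goal node? -> done
            return path
        for action, child in successors(state):   # expand with successor fn
            fringe.append((child, path + [action]))
    return None                             # fringe empty: no solution found
```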

  20. Search is a central topic in AI. • Originated with Newell and Simon's work on problem solving; see Human Problem Solving (1972). • Automated reasoning is a natural search task. • More recently: given that almost all AI formalisms (planning, learning, etc.) are NP-complete or worse, some form of search is generally unavoidable (i.e., no smarter algorithm is available).

  21–23. Tree search example [Figures: successive expansions of the search tree for the Romania route-finding problem]

  24. Implementation: states vs. nodes • A state is a (representation of a) physical configuration. • A node is a data structure constituting part of a search tree; it includes the state, the tree parent node, the action (applied to the parent), the path cost g(x) (from the initial state to the node), and the depth. • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states. • The fringe is the collection of nodes that have been generated but not yet expanded (a queue → different types of queues insert elements in different orders); each node of the fringe is a leaf node.
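A minimal sketch of the node data structure and Expand function just described; the field names (state, parent, action, g, depth) follow the slide, everything else is illustrative:

```python
# Search-tree node with the slide's fields, plus the Expand function that
# fills them in from the successor function.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None   # tree parent node
    action: Any = None                # action applied to the parent
    g: float = 0.0                    # path cost from the initial state
    depth: int = 0

def expand(node, successors, step_cost=lambda x, a, y: 1):
    return [Node(state=s,
                 parent=node,
                 action=a,
                 g=node.g + step_cost(node.state, a, s),
                 depth=node.depth + 1)
            for a, s in successors(node.state)]
```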

  25. Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the least-cost solution • m: maximum depth of the state space (may be ∞)

  26. Uninformed search strategies • Uninformed (blind) search strategies use only the information available in the problem definition: • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search • Bidirectional search

  27–30. Breadth-first search • Expand the shallowest unexpanded node • Implementation: the fringe is a FIFO queue, i.e., new successors go at the end [Figures: step-by-step expansion of the search tree in breadth-first order]
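A sketch of breadth-first search as described on these slides: a FIFO fringe, with the goal test applied when a node is chosen for expansion (which is what gives the O(b^(d+1)) bound on the next slide). No repeated-state check, as in plain tree search:

```python
# Breadth-first search: FIFO fringe, so the shallowest unexpanded node is
# always chosen next.

from collections import deque

def breadth_first_search(initial, successors, is_goal):
    fringe = deque([(initial, [])])
    while fringe:
        state, path = fringe.popleft()     # shallowest unexpanded node
        if is_goal(state):
            return path
        for action, child in successors(state):
            fringe.append((child, path + [action]))   # new successors at end
    return None
```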

  31. Properties of breadth-first search • Complete? Yes (if b is finite) • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if all step costs are identical) • Space is the bigger problem (more than time)

  32. Uniform-cost search • Expand the least-cost unexpanded node (instead of the shallowest); equivalent to breadth-first search if all step costs are equal. • Implementation: • fringe = queue ordered by path cost • Complete? Yes, if b is finite and every step cost ≥ ε > 0 • Time? # of nodes with g ≤ cost of the optimal solution C*, i.e., O(b^(1 + ⌊C*/ε⌋)) • Space? # of nodes with g ≤ cost of the optimal solution, O(b^(1 + ⌊C*/ε⌋)) • Optimal? Yes – nodes are expanded in increasing order of g(n)
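A sketch of uniform-cost search using a binary heap as the cost-ordered fringe; testing the goal when a node is popped for expansion (not when it is generated) is what makes the result optimal:

```python
# Uniform-cost search: the fringe is a priority queue ordered by path cost g,
# so the least-cost unexpanded node is chosen next.

import heapq
from itertools import count

def uniform_cost_search(initial, successors, is_goal, step_cost):
    tie = count()                            # tie-breaker for equal g values
    fringe = [(0, next(tie), initial, [])]   # (g, tie, state, actions)
    while fringe:
        g, _, state, path = heapq.heappop(fringe)
        if is_goal(state):                   # goal test on expansion => optimal
            return g, path
        for action, child in successors(state):
            heapq.heappush(fringe,
                           (g + step_cost(state, action, child),
                            next(tie), child, path + [action]))
    return None
```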

  33–44. Depth-first search • Expand the deepest unexpanded node • Implementation: the fringe is a LIFO queue (a stack), i.e., put successors at the front [Figures: step-by-step expansion of the search tree in depth-first order]
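A sketch of depth-first search with a LIFO fringe; a Python list used as a stack plays the role of "put successors at the front". The optional depth limit is an addition (anticipating depth-limited search) that guards against the infinite-path incompleteness noted on the next slide:

```python
# Depth-first search: LIFO fringe (a stack), so the deepest unexpanded node
# is chosen next. Plain DFS is incomplete in infinite-depth spaces, hence
# the optional depth limit.

def depth_first_search(initial, successors, is_goal, limit=50):
    fringe = [(initial, [], 0)]              # (state, actions, depth)
    while fringe:
        state, path, depth = fringe.pop()    # deepest unexpanded node
        if is_goal(state):
            return path
        if depth < limit:
            for action, child in successors(state):
                fringe.append((child, path + [action], depth + 1))
    return None
```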

  45. Properties of depth-first search • Complete? No: fails in infinite-depth spaces and spaces with loops. • Modify to avoid repeated states along the path → complete in finite spaces. • Time? O(b^m): terrible if m is much larger than d, but if solutions are dense, it may be much faster than breadth-first. • Space? O(bm), i.e., linear space! • Optimal? No. • m – maximum depth of the state space; d – depth of the shallowest (least-cost) solution. • Note: in backtracking search only one successor is generated at a time → only O(m) memory is needed; each successor is a modification of the current state, but we have to be able to undo each modification.

  46. Iterative deepening search • Why would one do that? It combines the good memory requirements of depth-first search with the completeness of breadth-first search when the branching factor is finite, and its optimality when the path cost is a non-decreasing function of the depth of the node.

  47–50. Iterative deepening search with depth limits l = 0, 1, 2, 3 [Figures: the depth-limited search tree for each limit]
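A sketch of iterative deepening as illustrated in slides 47–50: depth-limited depth-first search rerun with increasing limits l = 0, 1, 2, …:

```python
# Iterative deepening: repeat depth-limited DFS with limits l = 0, 1, 2, ...
# Memory stays linear, as in DFS, while completeness matches BFS for
# finite branching factors.

def depth_limited(state, successors, is_goal, limit, path=()):
    if is_goal(state):
        return list(path)
    if limit == 0:                       # cut off the search at this depth
        return None
    for action, child in successors(state):
        found = depth_limited(child, successors, is_goal,
                              limit - 1, path + (action,))
        if found is not None:
            return found
    return None

def iterative_deepening(initial, successors, is_goal, max_limit=50):
    for limit in range(max_limit + 1):   # l = 0, 1, 2, ...
        found = depth_limited(initial, successors, is_goal, limit)
        if found is not None:
            return found
    return None
```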
