
Solving problems by searching

This chapter provides an overview of problem-solving agents, problem types, problem formulation, example problems, and basic search algorithms. It discusses the use of goal-based agents and explores different search algorithms for solving problems.



  1. Solving problems by searching Chapter 3

  2. Outline • Problem-solving agents • Problem types • Problem formulation • Example problems • Basic search algorithms

  3. Which type of agent will be used, and why? A goal-based agent called a problem-solving agent.

  4. Problem-solving agents • Problem-solving agents use atomic representations: states of the world are treated as wholes, with no internal structure. • Agents that use structured representations of states are usually called planning agents. • Problem solving begins with precise definitions of problems and their solutions. • We then consider general-purpose search algorithms that can be used to solve these problems: • Uninformed search algorithms are given no information about the problem other than its definition. • Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.

  5. An agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value. For this to work, we must be more specific about the properties of the environment: • The environment is observable, so the agent always knows the current state. • The environment is discrete, so at any given state there are only finitely many actions to choose from. • The environment is known, so the agent knows which states are reached by each action. • The environment is deterministic, so each action has exactly one outcome.

  6. The process of looking for a sequence of actions that reaches the goal is called search. • A search algorithm takes a problem as input and returns a solution in the form of an action sequence. • Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. • While executing, the agent ignores its percepts when choosing an action because it knows in advance what they will be; this is called an open-loop system.

  7. Problem-solving agents
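
This slide shows the agent's formulate-search-execute loop. As a rough orientation, here is a minimal Python sketch of that loop; the helpers formulate_goal, formulate_problem, and search are illustrative parameters, not a fixed API:

```python
def simple_problem_solving_agent(formulate_goal, formulate_problem, search):
    seq = []                                  # current open-loop plan

    def agent(percept):
        nonlocal seq
        state = percept                       # fully observable: percept = state
        if not seq:                           # plan exhausted: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = list(search(problem) or [])
        return seq.pop(0) if seq else None    # execute next action of the plan

    return agent
```

Note that once a plan exists, the returned action ignores the percept entirely, which is exactly the open-loop behavior described on slide 6.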

  8. Example: Romania • On holiday in Romania; currently in Arad. • Flight leaves tomorrow from Bucharest • Formulate goal: • be in Bucharest • Formulate problem: • states: various cities • actions: drive between cities • Find solution: • sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
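
To make this formulation concrete, here is a minimal Python sketch of the Romania problem as data. The road map is a small fragment of the AIMA map (distances in km); the function names are illustrative:

```python
roads = {
    "Arad":    {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":   {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}
initial_state, goal = "Arad", "Bucharest"

def actions(state):
    """Drive to any city directly connected by road."""
    return sorted(roads.get(state, {}))

def result(state, action):
    return action          # driving toward a city puts the agent in that city
```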

  9. Example: Romania

  10. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. • Problem formulation is the process of deciding what actions and states to consider, given a goal.

  11. A problem can be defined formally by five components: • The initial state that the agent starts in, e.g., In(Arad). • A description of the possible actions available to the agent: given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. For example, from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}. • A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a). We also use the term successor to refer to any state reachable from a given state by a single action. For example, RESULT(In(Arad), Go(Zerind)) = In(Zerind). • Together, the initial state, actions, and transition model implicitly define the state space.

  12. Cont. • The goal test, which determines whether a given state is a goal state. • Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states; for example, in chess, the goal is to reach a state called "checkmate." • A path cost function that assigns a numeric cost to each path. • The problem-solving agent chooses a cost function that reflects its own performance measure. The step cost of taking action a in state s to reach state s' is denoted by c(s, a, s'). • An optimal solution has the lowest path cost among all solutions. • Note: we assume that step costs are nonnegative.
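
A minimal Python sketch of these five components as an abstract class; the method names are illustrative, not a fixed API:

```python
class Problem:
    """The five components of a formally defined problem (slides 11-12)."""

    def __init__(self, initial):
        self.initial = initial            # 1. the initial state

    def actions(self, s):                 # 2. ACTIONS(s): applicable actions
        raise NotImplementedError

    def result(self, s, a):               # 3. transition model RESULT(s, a)
        raise NotImplementedError

    def goal_test(self, s):               # 4. is s a goal state?
        raise NotImplementedError

    def step_cost(self, s, a, s2):        # 5. c(s, a, s'), assumed nonnegative
        return 1                          # default: every action costs 1
```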

  13. Problem types • Deterministic, fully observable → single-state problem • Agent knows exactly which state it will be in; solution is a sequence • Non-observable → sensorless problem (conformant problem) • Agent may have no idea where it is; solution is still a sequence • Nondeterministic and/or partially observable → contingency problem • Percepts provide new information about the current state • Often interleave search and execution • Unknown state space → exploration problem

  14. EXAMPLE PROBLEMS

  15. Example Problems • Toy problems (but sometimes useful) • Illustrate or exercise various problem-solving methods • Concise, exact description • Can be used to compare performance • Examples: 8-puzzle, 8-queens problem, cryptarithmetic, vacuum world, missionaries and cannibals, simple route finding • Real-world problems • More difficult • No single, agreed-upon description • Examples: route finding, touring and traveling salesperson problems, VLSI layout, robot navigation, assembly sequencing

  16. Toy Problems: The vacuum world • The world has only two locations • Each location may or may not contain dirt • The agent may be in one location or the other • 8 possible world states (numbered 1–8 in the figure) • Three possible actions: Left, Right, Suck • Goal: clean up all the dirt

  17. Toy Problems: The vacuum world • States: one of the 8 states given earlier • Actions: move left, move right, suck • Goal test: no dirt left in any square • Path cost: each action costs one
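
As a sketch, this formulation can be written down directly. A state is a pair (agent location, set of dirty squares), which yields the 2 · 2² = 8 states from the previous slide; the names are illustrative:

```python
def actions(state):
    return ["Left", "Right", "Suck"]      # always applicable in this world

def result(state, action):
    loc, dirt = state
    if action == "Suck":
        return (loc, dirt - {loc})        # remove dirt from the current square
    return ("A" if action == "Left" else "B", dirt)

def goal_test(state):
    return not state[1]                   # no dirt left in any square

def step_cost(s, a, s2):
    return 1                              # each action costs one

start = ("A", frozenset({"A", "B"}))      # agent in A, both squares dirty
```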

  18. Example: vacuum world • Single-state, start in #5. Solution?

  19. Example: vacuum world • Single-state, start in #5. Solution? [Right, Suck] • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution?

  20. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: percept gives location and dirt at the current location only • Percept: [L, Clean], i.e., start in #5 or #7. Solution?

  21. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: percept gives location and dirt at the current location only • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]

  22. Single-state problem formulation A problem is defined by four items: • initial state, e.g., "at Arad" • actions or successor function S(x) = set of action–state pairs • e.g., S(Arad) = {<Arad → Zerind, Zerind>, …} • goal test, which can be • explicit, e.g., x = "at Bucharest" • implicit, e.g., Checkmate(x) • path cost (additive) • e.g., sum of distances, number of actions executed, etc. • c(x,a,y) is the step cost, assumed to be ≥ 0 • A solution is a sequence of actions leading from the initial state to a goal state

  23. Selecting a state space • The real world is complex, so the state space must be abstracted for problem solving • (Abstract) state = set of real states • (Abstract) action = complex combination of real actions • e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. • For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind" • (Abstract) solution = set of real paths that are solutions in the real world • Each abstract action should be "easier" than the original problem

  24. Vacuum world state space graph • states? • actions? • goal test? • path cost?

  25. Vacuum world state space graph • states? Dirt locations and robot (agent) location • Number of possible states: n · 2^n for n locations (here 2 · 2² = 8) • actions? Left, Right, Suck • goal test? No dirt at any location • path cost? 1 per action

  26. Example: The 8-puzzle • states? • actions? • goal test? • path cost?

  27. Example: The 8-puzzle • states? Locations of the tiles and the blank • Number of possible states: 9!/2 (only half of the 9! arrangements are reachable) • actions? Move blank left, right, up, down • goal test? State = goal state (given) • path cost? 1 per move [Note: finding optimal solutions to the n-puzzle family is NP-hard]
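
A minimal sketch of this formulation, representing a state as a 9-tuple in row-major order with 0 standing for the blank (the names are illustrative):

```python
MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def actions(state):
    """Moves of the blank that stay on the 3x3 board."""
    i = state.index(0)                    # position of the blank, 0..8
    return [a for a, d in MOVES.items()
            if 0 <= i + d < 9
            and not (a == "Left" and i % 3 == 0)     # no wrap off the left edge
            and not (a == "Right" and i % 3 == 2)]   # no wrap off the right edge

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]                 # square the blank swaps with
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)        # one conventional goal layout
```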

  28. Example: robotic assembly • states?: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled • actions?: continuous motions of robot joints • goal test?: complete assembly • path cost?: time to execute

  29. Example: the 8-queens problem (incremental formulation) • States: Any arrangement of 0 to 8 queens on the board is a state. • Initial state: No queens on the board. • Actions: Add a queen to any empty square. • Transition model: Returns the board with a queen added to the specified square. • Goal test: 8 queens are on the board, none attacked. • In this formulation, we have 64 · 63 ⋯ 57 ≈ 1.8 × 10^14 possible sequences to investigate. A better formulation prohibits placing a queen on any square that is already attacked (sketched below): • States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another. • Actions: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
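
A minimal sketch of the improved incremental formulation, with a state as the tuple of queen rows already placed in the leftmost columns (illustrative names, not a fixed API):

```python
def attacks(c1, r1, c2, r2):
    """True if queens at (column, row) positions attack each other."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Rows in the next column that no already-placed queen attacks."""
    col = len(state)
    return [row for row in range(8)
            if not any(attacks(c, r, col, row) for c, r in enumerate(state))]

def result(state, row):
    return state + (row,)                 # add a queen in the next column

def goal_test(state):
    return len(state) == 8                # 8 mutually non-attacking queens
```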

  30. There are two main kinds of formulation. • An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state. • A complete-state formulation starts with all 8 queens on the board and moves them around.

  31. Real-world problems: airline travel planning • States: Each state obviously includes a location (e.g., an airport) and the current time. Because the cost of a flight segment may depend on previous segments, their fare bases, and their status as domestic or international, the state must also record extra information about these "historical" aspects. • Initial state: This is specified by the user's query. • Actions: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed.

  32. Transition model: The state resulting from taking a flight has the flight's destination as the current location and the flight's arrival time as the current time. • Goal test: Are we at the final destination specified by the user? • Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

  33. Tree Search • A general TREE-SEARCH algorithm considers all possible paths to find a solution • A GRAPH-SEARCH algorithm avoids consideration of redundant paths

  34. Tree Search • The possible action sequences starting at the initial state form a search tree with: • the initial state at the root; • branches corresponding to actions; • nodes corresponding to states in the state space of the problem. • A leaf node is a node with no children in the tree. • The frontier (open list) is the set of all leaf nodes available for expansion at any given point.

  35. Tree search algorithms • Basic idea: • offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
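
A minimal sketch of the general TREE-SEARCH scheme, reusing the Problem interface sketched at slide 12. The choose argument, which picks the next leaf to expand, *is* the search strategy:

```python
def tree_search(problem, choose):
    frontier = [[problem.initial]]            # frontier of paths, leaf state last
    while frontier:
        path = choose(frontier)               # pick a leaf node to expand
        state = path[-1]
        if problem.goal_test(state):
            return path                       # sequence from root to goal
        for a in problem.actions(state):      # expanding generates successors
            frontier.append(path + [problem.result(state, a)])
    return None                               # frontier empty: no solution

# e.g. breadth-first: tree_search(p, lambda f: f.pop(0))
#      depth-first:   tree_search(p, lambda f: f.pop())
```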

  36. Tree search example

  37. Tree search example

  38. Tree search example

  39. Implementation: general tree search

  40. Implementation: states vs. nodes • A state is a (representation of) a physical configuration • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

  41. A node is a bookkeeping data structure used to represent the search tree. • A state corresponds to a configuration of the world. • Thus, nodes are on particular paths, as defined by PARENT pointers, whereas states are not. Furthermore, two different nodes can contain the same world state if that state is generated via two different search paths.

  42. Infrastructure for search algorithms Search algorithms require a data structure to keep track of the search tree that is being constructed. For each node n of the tree, we have a structure that contains four components: • n.STATE: the state in the state space to which the node corresponds; • n.PARENT: the node in the search tree that generated this node; • n.ACTION: the action that was applied to the parent to generate the node; • n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.
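
A minimal sketch of this node structure, together with child construction and the recovery of a solution via PARENT pointers described on slide 41:

```python
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, i.e. g(n)

def child_node(problem, parent, action):
    """Fill in all four fields of the child reached by taking `action`."""
    s = problem.result(parent.state, action)
    g = parent.path_cost + problem.step_cost(parent.state, action, s)
    return Node(s, parent, action, g)

def solution(node):
    """Recover the action sequence by following PARENT pointers to the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]
```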

  43. Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: how long does it take to find a solution? (number of nodes generated) • space complexity: how much memory is needed to perform the search? (maximum number of nodes in memory) • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of: • b: maximum branching factor of the search tree (maximum number of successors of any node) • d: depth of the least-cost solution (the number of steps along the path from the root) • m: maximum depth of the state space (may be ∞)

  44. To assess the effectiveness of a search algorithm, we can consider: • the search cost, which typically depends on the time complexity but can also include a term for memory usage; • or the total cost, which combines the search cost and the path cost of the solution found. • E.g., for Arad to Bucharest, the search cost is the amount of time taken by the search and the solution cost is the total length of the path in kilometers. The total cost would mix milliseconds and kilometers, so we must convert kilometers into seconds (e.g., via an average driving speed) before adding them.

  45. Uninformed search strategies • Uninformed search is also called blind search • The strategies have no additional information about states beyond that provided in the problem definition • All they can do is generate successors and distinguish a goal state from a non-goal state

  46. Uninformed search strategies • Uninformed search strategies use only the information available in the problem definition • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search

  47. datatype node • components: STATE, PARENT-NODE, OPERATOR, DEPTH, PATH-COST

  48. Breadth-first search • Expand the shallowest unexpanded node • All edges have the same cost (weight) • The goal test is applied to each node when it is generated, rather than when it is selected for expansion • Discards any new path to a state already in the frontier or explored set • Implementation: the frontier is a FIFO queue, i.e., new successors go at the end

  49. Breadth-first search • Expand the shallowest unexpanded node • Implementation: • Frontier is a FIFO queue, i.e., new successors go at the end
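
Putting the pieces together, a minimal sketch of breadth-first search as described on these two slides (FIFO frontier, goal test at generation time, duplicate states discarded). It reuses the Node helpers sketched at slide 42:

```python
from collections import deque

def breadth_first_search(problem):
    node = Node(problem.initial)
    if problem.goal_test(node.state):
        return solution(node)
    frontier = deque([node])                  # FIFO queue of leaf nodes
    reached = {problem.initial}               # states in frontier or explored
    while frontier:
        node = frontier.popleft()             # shallowest unexpanded node
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            if child.state not in reached:
                if problem.goal_test(child.state):   # test when generated
                    return solution(child)
                reached.add(child.state)
                frontier.append(child)        # new successors go at the end
    return None                               # no solution exists
```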
