

  1. Artificial Intelligence Chapter 03 Solving problems by searching

  2. Outline • Problem-solving agents • Problem types • Problem formulation • Example problems • Basic search algorithms [Slides 60-134]

  3. Search • Search permeates all of AI • What choices are we searching through? • Problem solving: action combinations (move 1, then move 3, then move 2, ...) • Natural language: ways to map words to parts of speech • Computer vision: ways to map features to object models • Machine learning: possible concepts that fit examples seen so far • Motion planning: sequences of moves to reach a goal destination • An intelligent agent is trying to find a set or sequence of actions to achieve a goal • This is a goal-based agent

  4. Fig 3.1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.

  5. Problem-solving agents (goal-based agents)

  function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    persistent: seq, an action sequence, initially empty
                state, some description of the current world state
                goal, a goal, initially null
                problem, a problem formulation
    state ← UPDATE-STATE(state, percept)
    if seq is empty then
      goal ← FORMULATE-GOAL(state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
      if seq = failure then return a null action
    action ← FIRST(seq)
    seq ← REST(seq)
    return action

  Fig 3.1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.

  6. Problem-solving Agent: Simplified Description

  SimpleProblemSolvingAgent(percept)
    state = UpdateState(state, percept)
    if sequence is empty then
      goal = FormulateGoal(state)
      problem = FormulateProblem(state, goal)
      sequence = Search(problem)
    action = First(sequence)
    sequence = Rest(sequence)
    return action
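  A minimal Python sketch of this loop, assuming hypothetical helper callables (update_state, formulate_goal, formulate_problem, search) supplied for a concrete problem:

    # Sketch of the simple problem-solving agent loop.
    # All helpers are hypothetical placeholders for problem-specific routines.
    class SimpleProblemSolvingAgent:
        def __init__(self, update_state, formulate_goal, formulate_problem, search):
            self.seq = []        # planned action sequence, initially empty
            self.state = None    # description of the current world state
            self.update_state = update_state
            self.formulate_goal = formulate_goal
            self.formulate_problem = formulate_problem
            self.search = search

        def __call__(self, percept):
            self.state = self.update_state(self.state, percept)
            if not self.seq:                      # no plan left: formulate and search
                goal = self.formulate_goal(self.state)
                problem = self.formulate_problem(self.state, goal)
                self.seq = self.search(problem)
                if self.seq is None:              # search failed
                    return None                   # the null action
            action, self.seq = self.seq[0], self.seq[1:]
            return action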

  7. Example: Romania • On holiday in Romania; agent currently in Arad. • Flight leaves tomorrow from Bucharest. • Find a short route to drive to Bucharest. • Formulate goal: • be in Bucharest • Formulate problem: • states: various cities • actions: drive between pairs of cities • Find solution: • find a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest, that leads from the current state to a state meeting the goal condition. In general, an agent with several immediate options of unknown value (that is, in an unknown environment) can decide what to do by first examining future actions that eventually lead to states of known value.
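  As a sketch, the fragment of the map relevant to this route can be written as a weighted adjacency dict in Python (distances follow the standard AIMA road map; treat the exact numbers as assumptions):

    # Fragment of the Romania road map: city -> {neighbor: distance in km}.
    roads = {
        "Arad":    {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu":   {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras": {"Sibiu": 99, "Bucharest": 211},
        # ... remaining cities omitted
    }

    # One solution found by search: a city sequence from Arad to Bucharest.
    solution = ["Arad", "Sibiu", "Fagaras", "Bucharest"]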

  8. Example: Fig 3.2 A simple road map of part of Romania

  9. Specifying the task environments – recalled: PEAS description

  10. Specifying the task environments – for the problem-solving agent • Task environments and their characteristics for a problem-solving agent.

  11. Assumptions • The geography of Romania is static. • The roads from the city of Arad to the city of Bucharest are static. • Thus, we assume that the task environment for the problem-solving agent is static (without any change). • Static or dynamic? → Static

  12. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  13. Assumptions • Having adopted the goal of driving to Bucharest, the agent considers where to go from Arad. • If the environment is unknown, the agent will not know which of its possible actions (which road) is best; it then has no choice but to try one of the actions of unknown value at random. • Suppose the agent has a map of Romania. The map provides the agent with information about the states (cities, directions, distances between cities) and the actions it can take. • Using the information provided by the map, the agent can decide what to do by "examining future actions" that eventually lead to states of known value. • Suppose each city on the map has a sign, so a driver arriving at a city knows where it is; hence the agent always knows the current state. • Thus, we assume that the environment is fully observable. • Static or dynamic? → Static • Fully or partially observable? → Fully observable

  14. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  15. Assumptions • In Romania, each city is connected to a small number of other cities. • The map displays only finitely many actions to choose from at any given state. • Assuming the environment is known, the agent knows which states are reached by each action. • We assume the environment is discrete, so at any given state there are only finitely many actions to choose from. • Static or dynamic? → Static • Fully or partially observable? → Fully observable • Discrete or continuous? → Discrete

  16. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  17. Assumptions • In Romania, under ideal conditions, if one chooses to drive from Arad to Sibiu, one does end up in Sibiu. (Conditions are not always ideal.) • Thus, we assume that the task environment of the agent is deterministic. • Static or dynamic? → Static • Fully or partially observable? → Fully observable • Discrete or continuous? → Discrete • Deterministic or stochastic? → Deterministic

  18. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  19. Assumptions • Under these assumptions, the solution to any problem is a fixed sequence of actions. • If the agent knows the initial state and the environment is known and deterministic, it knows exactly where it will be after the first action and what it will perceive. • Since only one percept is possible after the first action, the solution can specify only one possible second action, and so forth. • Thus, the task environment for the agent is sequential. • Static or dynamic? → Static • Fully or partially observable? → Fully observable • Discrete or continuous? → Discrete • Deterministic or stochastic? → Deterministic • Episodic or sequential? → Sequential

  20. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  21. Assumptions • An agent solving this problem by itself clearly operates in a single-agent environment. • Static or dynamic? → Static • Fully or partially observable? → Fully observable • Discrete or continuous? → Discrete • Deterministic or stochastic? → Deterministic • Episodic or sequential? → Sequential • Single agent or multi-agent? → Single agent

  22. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  23. Problem types • Deterministic, fully observable → single-state problem • Agent knows exactly which state it will be in; solution is a sequence • Non-observable → conformant problem (sensorless problem) • Agent may have no idea where it is; solution (if any) is a sequence • Nondeterministic and/or partially observable → contingency problem • Percepts provide new information about the current state • Solution is a tree or policy • Often interleave search and execution • Unknown state space → exploration problem ("online")

  24. Single-state problem formulation

  25. Search Example – A* (A-star) Informed Search Strategy – Romania Map Example • Formulate goal: be in Bucharest. • Formulate problem: states are cities; operators drive between pairs of cities. • Find solution: find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition. Fig 3.2 A simplified road map of part of Romania

  26. Search Space Definitions • State • A description of a possible state of the world • Includes all features of the world that are pertinent to the problem • Initial state • Description of all pertinent aspects of the state in which the agent starts the search • Goal test • Conditions the agent is trying to meet (e.g., have $1M) • Goal state • Any state which meets the goal condition • Thursday, have $1M, live in NYC • Friday, have $1M, live in Valparaiso • Action • Function that maps (transitions) from one state to another

  27. Search Space Definitions • Problem formulation • Describe a general problem as a search problem • Solution • Sequence of actions that transitions the world from the initial state to a goal state • Solution cost (additive) • Sum of the cost of operators • Alternative: sum of distances, number of steps, etc. • Search • Process of looking for a solution • Search algorithm takes problem as input and returns solution • We are searching through a space of possible states • Execution • Process of executing sequence of actions (solution)

  28. Problem Formulation A search problem is defined by the Initial state (e.g., Arad) Operators (e.g., Arad -> Zerind, Arad -> Sibiu, etc.) Goal test (e.g., at Bucharest) Solution cost (e.g., path cost)
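  One way to bundle these four items in code, as a sketch rather than a fixed API (names are illustrative):

    # A search problem: initial state, operators (successor function),
    # goal test, and step cost.
    class SearchProblem:
        def __init__(self, initial, successors, is_goal, step_cost):
            self.initial = initial
            self.successors = successors  # state -> iterable of (action, next_state)
            self.is_goal = is_goal        # state -> bool
            self.step_cost = step_cost    # (state, action, next_state) -> number

    # Romania route finding as an instance (uses the roads dict above):
    romania = SearchProblem(
        initial="Arad",
        successors=lambda c: [("go " + n, n) for n in roads.get(c, {})],
        is_goal=lambda c: c == "Bucharest",
        step_cost=lambda c, a, n: roads[c][n],
    )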

  29. Problem Solving • Consider the simpler cases in which the following holds. • The agent’s world (environment) is representable by a discrete set of states. • The agent’s actions are representable by a discrete set of operators. • The world is static and deterministic.

  30. Example: the vacuum world • Single-state, start in #5. • Solution? Comment: the vacuum moves Right, then Sucks.

  31. Example: the vacuum world • Single-state, start in #5. Solution? [Right, Suck] • Conformant (sensorless), start in {1,2,3,4,5,6,7,8} • e.g., Right goes to {2,4,6,8}. Solution? Comments: the vacuum can go from 1 to 2, from 3 to 4, from 5 to 6, or from 7 to 8. It needs Right, Suck, Left, Suck (because it is sensorless).

  32. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location • Percept: [L, Clean], i.e., start in #5 or #7. Solution?

  33. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]
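  The contingency solution [Right, if dirt then Suck] is a conditional plan rather than a fixed sequence; execution interleaves sensing with acting. A sketch (do and percept are hypothetical actuator/sensor callables; the percept is assumed to return "Dirty" or "Clean" for the current square):

    # Executing the conditional plan [Right, if dirt then Suck].
    def execute_contingency_plan(do, percept):
        do("Right")
        if percept() == "Dirty":   # sense only after moving
            do("Suck")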

  34. Single-state problem formulation – summary A search problem is defined by four items: • initial state, e.g., "at Arad" • operators/actions, or successor function S(x) = set of action–state pairs • e.g., S(Arad) = {<Arad → Zerind, Zerind>, …} • goal test, can be • explicit, e.g., x = "at Bucharest" • implicit, e.g., NoDirt(x), Checkmate(x) • solution/path cost (additive) • e.g., sum of distances, number of actions (or steps) executed, etc. • c(x,a,y), assumed to be ≥ 0, is the step cost from x to y by action a • A solution is a sequence of actions leading from the initial state to a goal state
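  For the Romania problem these items can be made concrete; a sketch using the roads dict above (function names are assumptions):

    # Successor function S(x): set of <action, resulting state> pairs.
    def successors(city):
        return {("go " + n, n) for n in roads.get(city, {})}

    # Step cost c(x, a, y) >= 0: road distance between adjacent cities.
    def step_cost(city, action, next_city):
        return roads[city][next_city]

    # Explicit goal test: x = "at Bucharest".
    def is_goal(city):
        return city == "Bucharest"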

  35. Selecting a state space – summary • Real world is absurdly complex → state space must be abstracted for problem solving • (Abstract) state = set of real states • (Abstract) action = complex combination of real actions, e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. • For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind" • Each abstract action should be "easier" than the original problem • (Abstract) solution = set of real paths that are solutions in the real world

  36. Vacuum world state space graph • states? • actions? • goal test? • path cost? Fig 3.3 The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck.

  37. Vacuum world state space graph • states? integer dirt and robot locations (ignore dirt amounts) • actions? Left, Right, Suck, NoOp • goal test? no dirt at all locations • path cost? 1 per action (0 for NoOp)
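  A sketch of one encoding of these answers (state = robot location plus the set of dirty squares; two locations, 0 = left and 1 = right, giving the 8 states of Fig 3.3):

    # Vacuum world state: (robot_location, frozenset of dirty locations).
    def vacuum_successors(state):
        loc, dirt = state
        yield "Left",  (max(loc - 1, 0), dirt)   # bumping the left wall is a no-op
        yield "Right", (min(loc + 1, 1), dirt)   # bumping the right wall is a no-op
        yield "Suck",  (loc, dirt - {loc})       # current square becomes clean

    def vacuum_goal(state):
        loc, dirt = state
        return not dirt      # no dirt at any location

    # Path cost: 1 per action (0 for NoOp).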

  38. Formulating Problem as a Graph • In the graph • each node represents a possible state; • a node is designated as the initial state; • one or more nodes represent goal states, states in which the agent’s goal is considered accomplished. • each edge represents a state transition caused by a specific agent action; • associated to each edge is the cost of performing that transition.
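  A sketch of this graph formulation as a data structure (field names are illustrative):

    from dataclasses import dataclass, field

    # Problem-as-graph: nodes are states; each edge carries the action
    # causing the transition and the cost of performing it.
    @dataclass
    class ProblemGraph:
        initial: str
        goals: frozenset
        edges: dict = field(default_factory=dict)  # node -> [(action, node, cost)]

        def transitions(self, node):
            return self.edges.get(node, [])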

  39. Search Graph • How do we reach a goal state? [Figure: a weighted search graph with initial state S, intermediate nodes A, B, C, D, E, F, goal states G, and edge costs between 2 and 7.] • There may be several possible ways. Or none! • Factors to consider: • cost of finding a path; • cost of traversing a path.

  40. Problem Solving as Search • Search space: set of states reachable from an initial state S0 via a (possibly empty/finite/infinite) sequence of state transitions. • To achieve the problem’s goal • search the space for a (possibly optimal) sequence of transitions starting from S0 and leading to a goal state; • execute (in order) the actions associated to each transition in the identified sequence. • Depending on the features of the agent’s world the two steps above can be interleaved
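  As a concrete sketch of the first step, breadth-first search (one of the basic algorithms covered later) finds a shortest action sequence from S0 to a goal state, assuming a transitions(state) function that yields (action, next_state) pairs:

    from collections import deque

    # Breadth-first search over the space of states reachable from s0.
    def breadth_first_search(s0, transitions, is_goal):
        if is_goal(s0):
            return []                      # the empty plan already succeeds
        frontier = deque([(s0, [])])       # (state, actions taken so far)
        explored = {s0}
        while frontier:
            state, actions = frontier.popleft()
            for action, nxt in transitions(state):
                if nxt in explored:
                    continue
                if is_goal(nxt):
                    return actions + [action]
                explored.add(nxt)
                frontier.append((nxt, actions + [action]))
        return None                        # no goal state is reachable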

  41. Problem Solving as Search • Reduce the original problem to a search problem. • A solution for the search problem is a path initial state → goal state. • The solution for the original problem is: • either the sequence of actions associated with the path • or the description of the goal state.

  42. [Figure: the 8-puzzle board

      2 8 3
      1 6 4
      7 _ 5 ]

  43. Example: The 8-puzzle • Go from state S to state G. [Figure: start board (S) and goal board (G).]

  44. The 8-puzzle – Successor Function • SUCC(state) → subset of states • The successor function encodes knowledge about the 8-puzzle game, but it does not tell us which outcome to use, nor to which state of the board to apply it. • Search is about the exploration of alternatives.

  45. Example: The 8-puzzle [Figure: part of the 8-puzzle state space; edges labeled with blank-tile moves L, R, U, D.]

  46. Formulating the 8-puzzle Problem • States: configurations of tiles • Operators: move one tile Up/Down/Left/Right • There are 9! = 362,880 possible states (all permutations of {□, 1, 2, 3, 4, 5, 6, 7, 8}). • There are 16! possible states for the 15-puzzle. • Not all states are reachable from a given state. (In fact, exactly half of them are.) • How can an artificial agent represent the states and the state space for this problem?
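  The "exactly half" claim follows from an invariant: moving the blank never changes the parity of the number of inversions among the tiles. A sketch of the resulting reachability test (states assumed to be 9-tuples in row-major order with 0 for the blank):

    # Count inversions among the tiles, ignoring the blank (0).
    def inversions(state):
        tiles = [t for t in state if t != 0]
        return sum(1 for i in range(len(tiles))
                     for j in range(i + 1, len(tiles))
                     if tiles[i] > tiles[j])

    # Two 8-puzzle states are mutually reachable iff their parities match.
    def same_component(s, g):
        return inversions(s) % 2 == inversions(g) % 2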

  47. Problem Formulation • Choose an appropriate data structure to represent the world states. • Define each operator as a precondition/effects pair where the • precondition holds exactly in the states the operator applies to, • effects describe how a state changes into a successor state by the application of the operator. • Specify an initial state. • Provide a description of the goal (used to check if a reached state is a goal state).

  48. Formulating the 8-puzzle Problem • States: each represented by a 3 × 3 array of numbers in [0 . . . 8], where value 0 is for the empty cell. The board

      2 8 3
      1 6 4
      7 _ 5

  becomes

      A = 2 8 3
          1 6 4
          7 0 5

  49. Formulating the 8-puzzle Problem • Operators: 24 operators of the form Op(r,c,d), where r, c ∈ {1, 2, 3} and d ∈ {L, R, U, D}. • Op(r,c,d) moves the empty space at position (r, c) in the direction d. Example, Op(3,2,L):

      2 8 3        2 8 3
      1 6 4   →    1 6 4
      7 0 5        0 7 5

  50. Preconditions and Effects • Example, Op(3,2,R):

      2 8 3        2 8 3
      1 6 4   →    1 6 4
      7 0 5        7 5 0

  • Precondition: A[3, 2] = 0 • Effects: A[3, 2] ← A[3, 3]; A[3, 3] ← 0 • We have 24 operators in this problem formulation . . . 20 too many!
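  A sketch of one operator as a precondition/effects pair over the array representation, plus the reduction to four operators (0-indexed here, unlike the 1-indexed slide):

    # Op(r, c, d) on a 3x3 list-of-lists board A (0 = blank).
    MOVES = {"L": (0, -1), "R": (0, 1), "U": (-1, 0), "D": (1, 0)}

    def apply_op(A, r, c, d):
        dr, dc = MOVES[d]
        nr, nc = r + dr, c + dc
        # Precondition: the blank is at (r, c) and the move stays on the board.
        if A[r][c] != 0 or not (0 <= nr < 3 and 0 <= nc < 3):
            return None
        # Effects: the neighboring tile slides into (r, c); the blank moves on.
        B = [row[:] for row in A]
        B[r][c], B[nr][nc] = B[nr][nc], 0
        return B

    # Because only the blank's direction matters, four operators Move(d)
    # (first locating the blank) replace all 24 Op(r, c, d).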
