
Introduction to Problem Solving: Search Algorithms in Artificial Intelligence

Learn about problem-solving agents, problem types, problem formulation, and basic search algorithms in AI. Explore uninformed and informed search strategies.




Presentation Transcript


  1. Artificial Intelligence Chapter 03-01 Solving Problems by Searching: Introduction to Problem Solving

  2. Outline • Problem-solving agents • Problem types • Problem formulation • Example problems • Basic search algorithms • Uninformed Search Strategies [Ch 03-02] • Informed Search Strategies [Ch 03-03]

  3. Search • Search permeates all of AI • What choices are we searching through? • Problem solving → action combinations (move 1, then move 3, then move 2, ...) • Natural language → ways to map words to parts of speech • Computer vision → ways to map features to an object model • Machine learning → possible concepts that fit the examples seen so far • Motion planning → a sequence of moves to reach the goal destination • An intelligent agent is trying to find a set or sequence of actions to achieve a goal • This is a goal-based agent

  4. Fig 3.1 A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.

  5. Problem-solving agents (goal-based agents)

    function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
      persistent: seq, an action sequence, initially empty
                  state, some description of the current world state
                  goal, a goal, initially null
                  problem, a problem formulation
      state ← UPDATE-STATE(state, percept)
      if seq is empty then
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
        if seq = failure then return a null action
      action ← FIRST(seq)
      seq ← REMAINDER(seq)
      return action

  6. Problem-solving Agent: Simplified Description

    SimpleProblemSolvingAgent(percept)
      state = UpdateState(state, percept)
      if sequence is empty then
        goal = FormulateGoal(state)
        problem = FormulateProblem(state, goal)
        sequence = Search(problem)
      action = First(sequence)
      sequence = Remainder(sequence)
      return action
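Read as Python, this control loop might look like the following sketch. The helpers update_state, formulate_goal, formulate_problem, and search are stand-ins for the components named on the slide, not a real API:

    def make_problem_solving_agent(update_state, formulate_goal,
                                   formulate_problem, search):
        state, seq = None, []                    # persistent across calls

        def agent(percept):
            nonlocal state, seq
            state = update_state(state, percept)
            if not seq:                          # current plan exhausted: replan
                goal = formulate_goal(state)
                problem = formulate_problem(state, goal)
                seq = search(problem) or []      # search may return None on failure
                if not seq:
                    return None                  # the "null action"
            return seq.pop(0)                    # execute the plan one step at a time

        return agent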

  7. Example: Romania Problem: On holiday in Romania; agent currently in Arad. Flight leaves tomorrow from Bucharest. Find a short route to drive to Bucharest. • Formulate goal: • goal: be in Bucharest • Formulate problem: • states: various cities • actions: operators drive between pairs of cities (paths) • Find solution: • Find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition. In general, an agent with several immediate options of unknown value (that is, when the environment is unknown) can decide what to do by first examining future actions that eventually lead to states of known value.
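As a concrete illustration, the relevant fragment of the Romania map can be written down as a weighted adjacency dict (distances in km from the standard AIMA map; only some cities shown, a sketch rather than anything on the slide):

    romania = {
        "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
        "Sibiu":          {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
        "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
        "Pitesti":        {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
        "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
    }

    def route_cost(graph, route):
        # Total driving distance along a candidate solution path.
        return sum(graph[a][b] for a, b in zip(route, route[1:]))

    # The solution from the slide: Arad -> Sibiu -> Fagaras -> Bucharest
    print(route_cost(romania, ["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450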

  8. Example: Fig 3.2 A simple road map of part of Romania

  9. Specifying the task environments – recall the PEAS description

  10. Specifying the task environments – for the problem-solving agent • Task environments and their characteristics for a problem-solving agent.

  11. Assumptions • The geography of Romania is static. • The paths from the city of Arad to the city of Bucharest are static. • Thus, we assume that the task environment for the problem-solving agent is static (without any change). [Checklist: Static or dynamic?]

  12. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  13. Assumptions • Having adopted the goal of driving to Bucharest, the agent considers where to go from Arad. • If the environment is unknown, the agent will not know which of its possible actions (paths) is best. • The agent then has no choice but to try one of the actions (paths) of unknown value at random. Environment: fully observable. [Checklist: Static or dynamic? Fully or partially observable?]

  14. Assumptions • Suppose the agent has a map of Romania. • The map provides the agent with information about the states (cities, directions, distances between cities) and the actions (paths) it can take. • Using the map, the agent can decide what to do by “examining future actions” that eventually lead to states of known value. • Suppose also that each city has a sign, so arriving drivers know which city they have reached. • So the agent always knows the current state (city). • Thus, we assume that the environment is fully observable. [Checklist: Static or dynamic? Fully or partially observable?]

  15. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  16. Assumptions • In Romania’s map, each city is connected to a small number of other cities. • The map displays only finitely many actions (paths) to choose from at any given state (city). • Assuming the environment is known, the agent knows which states (cities) are reached by each action (path). • We assume the environment is discrete, so at any given state (city) there are only finitely many actions (paths) to choose from. [Checklist: Static or dynamic? Fully or partially observable? Discrete or continuous?]

  17. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  18. Assumptions • In Romania, under ideal conditions, if the agent chooses to drive from Arad to Sibiu, it does end up in Sibiu. (Conditions are not always ideal.) • Thus, we assume that the task environment of the agent is deterministic. [Checklist: Static or dynamic? Fully or partially observable? Discrete or continuous? Deterministic or stochastic?]

  19. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  20. Assumptions • Under these assumptions, the solution to any problem is a fixed sequence of actions. • If the agent knows the initial state and the environment is known and deterministic, it knows exactly where it will be after the first action and what it will perceive. • Since only one percept is possible after the first action, the solution can specify only one possible second action, and so on. • Thus, the task environment for the agent is sequential. [Checklist: Static or dynamic? Fully or partially observable? Discrete or continuous? Deterministic or stochastic? Episodic or sequential?]

  21. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  22. Assumptions • An agent solving this problem by itself is clearly in a single-agent environment. [Checklist: Static or dynamic? Fully or partially observable? Discrete or continuous? Deterministic or stochastic? Episodic or sequential? Single-agent or multi-agent?]

  23. Specifying the task environments – recalled • Task environments and their characteristics for a problem-solving agent.

  24. Problem types • Deterministic, fully observable environment → single-state problem • Agent knows exactly which state it will be in • Solution is a sequence of actions • Non-observable environment → conformant problem (sensorless problem) • Agent may have no idea where it is (knows only that it may be in any of a number of states) • Solution (if any) is a sequence of actions • Nondeterministic and/or partially observable environment → contingency problem • Percepts provide new information about the current state • Solution is a tree or policy • Often interleaves search and execution • Unknown state space → exploration problem (“online”)

  25. Summary: Search Example – Romania Map • Formulate goal: be in Bucharest. • Formulate problem: states are cities; operators drive between pairs of cities. • Find solution: find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition. Fig 3.2 A simplified road map of part of Romania

  26. Search Space Definitions • State • A description of a possible state of the world • Includes all features of the world that are pertinent to the problem • Initial state • Description of all pertinent aspects of the state in which the agent starts the search • Goal test • Conditions the agent is trying to meet (e.g., have $1M) • Goal state • Any state which meets the goal condition • Action • Function that maps (transitions) from one state to another

  27. Search Space Definitions • Problem formulation • Describe a general problem as a search problem • Solution • Sequence of actions that transitions the world from the initial state to a goal state • Solution cost (additive) • Sum of the cost of operators • Alternative: sum of distances, number of steps, etc. • Search • Process of looking for a solution • Search algorithm takes problem as input and returns solution • We are searching through a space of possible states • Execution • Process of executing sequence of actions (solution)
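These definitions translate directly into a small interface. The sketch below is an assumption on my part (it follows the shape of the AIMA reference code, not anything shown on the slide):

    class Problem:
        """A search problem: initial state, actions, transition model, goal test, costs."""
        def __init__(self, initial, goal=None):
            self.initial, self.goal = initial, goal
        def actions(self, state):
            raise NotImplementedError    # legal actions in `state`
        def result(self, state, action):
            raise NotImplementedError    # state reached by doing `action` in `state`
        def goal_test(self, state):
            return state == self.goal    # explicit goal by default
        def step_cost(self, state, action, next_state):
            return 1                     # default: each action costs 1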

  28. Problem Formulation: Romania Map Example • A search problem is defined by: • the initial state (e.g., Arad) • operators (e.g., Arad → Zerind, Arad → Sibiu, etc.) • a goal test (e.g., at Bucharest) • solution cost (e.g., path cost) • One way to code this up is sketched below.
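Putting the pieces together: the Romania route-finding problem as an instance of the Problem sketch above, using the romania dict from earlier (GraphProblem is a hypothetical name, not from the slides):

    class GraphProblem(Problem):
        """Route finding on a weighted graph such as the romania dict."""
        def __init__(self, graph, initial, goal):
            super().__init__(initial, goal)
            self.graph = graph
        def actions(self, state):
            return list(self.graph[state])          # drivable neighboring cities
        def result(self, state, action):
            return action                           # driving to a city puts you there
        def step_cost(self, state, action, next_state):
            return self.graph[state][next_state]    # road distance

    problem = GraphProblem(romania, "Arad", "Bucharest")
    assert problem.goal_test("Bucharest")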

  29. Problem Solving: Romania Map Example • Consider the simpler cases in which all of the following hold: • The agent’s world (environment) is representable by a discrete set of states. • The agent’s actions are representable by a discrete set of operators. • The world is static and deterministic.

  30. Example: the vacuum world • Single-state problem: • initial state = 5 • goal states = {7, 8} • Solution? [Right, Suck] Comment: the vacuum moves Right and then Sucks.

  31. Example: the vacuum world • Conformant (sensorless) problem: • initial belief state = {1, 2, 3, 4, 5, 6, 7, 8} • Right goes to {2, 4, 6, 8} • Left goes to {1, 3, 5, 7} • Suck goes to {5, 4, 7, 8} • Solution? [Right, Suck, Left, Suck] Comments: V can go from 1 to 2, or 3 to 4, or 5 to 6, or 7 to 8. It needs Right, Suck, Left, Suck because V is sensorless.
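In the sensorless case the agent plans in belief-state space: it tracks the set of states it might be in. A minimal sketch of the belief-state prediction step, assuming some single-state transition function result_fn (such as the vacuum_result sketched for a later slide):

    def predict(belief, action, result_fn):
        # The new belief state: every state reachable by applying `action`
        # to some state the agent might currently be in.
        return {result_fn(s, action) for s in belief}

    # e.g., starting from total ignorance over states 1..8, predicting "Right"
    # would shrink the belief to the agent-on-the-right states {2, 4, 6, 8}.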

  32. Example: the vacuum world • Contingency problem: • initial state = {5, 7}; percept: [L, Clean], i.e., start in state 5 or state 7. • Nondeterministic: Suck may dirty a clean carpet (i.e., Suck occasionally fails). • Partially observable: the agent senses only its location and dirt at the current location (local sensing). • Solution? [Right, if dirt then Suck]

  33. Problem Solving • We start by considering the simpler cases in which the environment is • fully observable, static and deterministic. • In such environments the following holds for an agent A: • A’s world is representable by a discrete set of states. • A’s actions are representable by a discrete set of operators. • The next world state is completely determined by the current state and A’s actions. • The world’s state transitions are caused exclusively by A’s actions

  34. Single-State Problem Formulation – Summary Formally, a search problem is defined by four components: • An initial state, e.g., In(Arad), which means “at Arad”. • A successor function (operators/actions): S(x) = set of action–state pairs, e.g., S(Arad) = {<Go(Zerind), In(Zerind)>, ...}. • A goal test, which can be explicit, e.g., x = In(Bucharest), or implicit, e.g., NoDirt(x), Checkmate(x). • A path cost (additive), e.g., sum of distances, number of actions (steps) executed, etc.; c(x, a, y) is the step cost from x to y by action a, assumed to be ≥ 0. A solution is a sequence of actions leading from the initial state to a goal state.
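The explicit/implicit distinction is just a choice of goal-test predicate. A small sketch (the (location, dirt set) state encoding is an assumption, carried over from the vacuum example):

    # Explicit goal test: equality with a known goal state.
    def at_bucharest(state):
        return state == "Bucharest"

    # Implicit goal test: any state with a certain property qualifies.
    def no_dirt(state):
        location, dirt = state           # state = (location, set of dirty squares)
        return not dirt                  # NoDirt(x): nothing left to clean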

  35. Selecting a State Space – Summary Since the real world is absurdly complex, the state space must be abstracted for problem solving. • (Abstract) state = set of real states. • (Abstract) action = complex combination of real actions, e.g., Go(Zerind) from Arad (i.e., “Arad → Zerind”) represents a complex set of possible routes, detours, rest stops, etc. • For guaranteed realizability, any real state “in Arad” must get to some real state “in Zerind”. • Each abstract action should be “easier” than the original problem. • (Abstract) solution = set of real paths that are solutions in the real world.

  36. Example: Vacuum world state space graph • states? • actions? • goal test? • path cost? Fig 3.3 The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck.

  37. Example: Vacuum world state space graph • states? dirt locations and robot location (ignore dirt amounts) • actions? Left, Right, Suck, NoOp • goal test? no dirt at any location • path cost? 1 per action (0 for NoOp)
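One possible encoding of this formulation (a sketch; representing a state as (robot location, frozenset of dirty squares) over two squares A and B is my choice, not the slide's):

    def vacuum_actions(state):
        return ("Left", "Right", "Suck", "NoOp")

    def vacuum_result(state, action):
        loc, dirt = state
        if action == "Left":
            return ("A", dirt)
        if action == "Right":
            return ("B", dirt)
        if action == "Suck":
            return (loc, dirt - {loc})   # clean the current square
        return state                     # NoOp changes nothing

    def vacuum_goal_test(state):
        return not state[1]              # goal: no dirty squares anywhere

    def vacuum_step_cost(state, action, next_state):
        return 0 if action == "NoOp" else 1

    # e.g., vacuum_result(("A", frozenset({"A"})), "Suck") == ("A", frozenset())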

  38. Formulating Problem as a Labeled Graph • In the graph • each node represents a possible state; • a node is designated as the initial state; • one or more nodes represent goal states, states in which the agent’s goal is considered accomplished. • each edge represents a state transition caused by a specific agent action; • associated to each edge is the cost of performing that transition.

  39. Search Graph • How do we reach a goal state? [Figure: a small search graph with initial state S, goal states including G, intermediate nodes A–F, and numeric edge costs.] • There may be several possible ways. Or none! • Factors to consider: • the cost of finding a path; • the cost of traversing a path.
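To make the trade-off concrete, here is a uniform-cost search sketch over a toy weighted graph. The node names and edge costs are illustrative assumptions, since the slide's figure did not survive extraction:

    import heapq

    def uniform_cost_search(graph, start, goals):
        """Return (cost, path) for the cheapest path from start to any goal, or None."""
        frontier = [(0, start, [start])]          # (path cost so far, node, path)
        explored = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node in goals:
                return cost, path
            if node in explored:
                continue
            explored.add(node)
            for neighbor, step in graph.get(node, {}).items():
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
        return None

    # Illustrative graph loosely shaped like the slide's S ... G figure:
    toy = {"S": {"A": 4, "B": 5, "C": 4},
           "A": {"D": 3}, "B": {"D": 2, "E": 5}, "C": {"E": 7},
           "D": {"G": 4}, "E": {"G": 2}}
    print(uniform_cost_search(toy, "S", {"G"}))   # -> (11, ['S', 'A', 'D', 'G'])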

  40. Problem Solving as Search • Search space: set of states reachable from an initial state S0 via a (possibly empty, finite, or infinite) sequence of state transitions. • To achieve the problem’s goal: • search the space for a (possibly optimal) sequence of transitions starting from S0 and leading to a goal state; • execute (in order) the actions associated with each transition in the identified sequence. • Depending on the features of the agent’s world (e.g., for contingency problems), the two steps above can be interleaved.

  41. Problem Solving as Search • Reduce the original problem to a search problem. • A solution for the search problem is a path: initial state → goal state. • The solution for the original problem is: • either the sequence of actions associated with the path • or the description of the goal state.

  42. [Figure: the 8-puzzle start state]

    2 8 3
    1 6 4
    7 □ 5

  43. Example: The 8-puzzle • Problem: go from state S to state G. [Figure: start board (S) and goal board (G).]

  44. The 8-puzzle – Successor Function • The successor function is knowledge about the 8-puzzle game, but it does not tell us which outcome to use, nor to which state of the board to apply it. • SUCC(state) → subset of states. • Search is about the exploration of alternatives.
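A sketch of SUCC for the 8-puzzle, with states as 3 × 3 tuples and 0 for the blank (matching the array representation introduced on a later slide):

    def succ(state):
        # Successors of an 8-puzzle board, given as a 3x3 tuple of tuples with 0 = blank.
        r, c = next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)
        moves = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
        result = {}
        for d, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:          # blank stays on the board
                board = [list(row) for row in state]
                board[r][c], board[nr][nc] = board[nr][nc], board[r][c]
                result[d] = tuple(map(tuple, board))
        return result

    # e.g., succ(((2, 8, 3), (1, 6, 4), (7, 0, 5))) has keys {'U', 'L', 'R'}:
    # the blank, sitting on the bottom row, cannot move down.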

  45. Example: The 8-puzzle [Figure: the search tree from (S) to (G), with branches labeled by the moves L, R, U, D.]

  46. Formulating the 8-puzzle Problem • States: configurations of tiles • Operators: move one tile Up/Down/Left/Right • There are 9! = 362,880 possible states (all permutations of {□, 1, 2, 3, 4, 5, 6, 7, 8}, where □ is the empty space). • There are 16! possible states for the 15-puzzle. • Not all states are reachable from a given state. (In fact, exactly half of them are; see the parity sketch below.) • How can an artificial agent represent the states and the state space for this problem?
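The "exactly half" claim follows from a parity invariant: every legal move preserves the parity of the number of tile inversions, counting the tiles row by row and ignoring the blank. A sketch of the standard 3 × 3 solvability test based on that fact:

    def inversions(state):
        # Count inverted tile pairs, reading the board row by row, blank ignored.
        tiles = [t for row in state for t in row if t != 0]
        return sum(1 for i in range(len(tiles))
                     for j in range(i + 1, len(tiles)) if tiles[i] > tiles[j])

    def same_orbit(a, b):
        # On a 3x3 board, b is reachable from a iff their inversion parities match.
        return inversions(a) % 2 == inversions(b) % 2

    # inversions(((2, 8, 3), (1, 6, 4), (7, 0, 5))) == 11  (odd parity), so only the
    # odd-parity half of the 9! permutations is reachable from that start board.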

  47. Problem Formulation • Choose an appropriate data structure to represent the world states. • Define each operator as a precondition/effects pair where the • precondition holds exactly in the states the operator applies to, • effects describe how a state changes into a successor state by the application of the operator. • Specify an initial state. • Provide a description of the goal (used to check if a reached state is a goal state).

  48. Formulating the 8-puzzle Problem • States: each represented by a 3 × 3 array of numbers in [0 . . . 8], where value 0 is for the empty cell. For example, the board

    2 8 3
    1 6 4
    7 □ 5

  becomes

        | 2 8 3 |
    A = | 1 6 4 |
        | 7 0 5 |

  49. Formulating the 8-puzzle Problem • Let r and c be row and column, respectively; let d be a direction. • Operators: 24 operators of the form Op(r,c,d), where r, c ∈ {1, 2, 3} and d ∈ {L, R, U, D}. Op(r,c,d) moves the empty space at position (r, c) in the direction d. • Example: Op(3,2,L):

    2 8 3        2 8 3
    1 6 4   →    1 6 4
    7 0 5        0 7 5

  50. Preconditions and Effects • Example: Op(3,2,R):

    2 8 3        2 8 3
    1 6 4   →    1 6 4
    7 0 5        7 5 0

  • Preconditions: A[3, 2] = 0
  • Effects: A[3, 2] ← A[3, 3]; A[3, 3] ← 0
  • We have 24 operators in this problem formulation . . . 20 too many!
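A direct transcription of this operator family (a sketch; 1-indexed (r, c) as on the slide, boards as 3 × 3 lists of lists):

    def op(state, r, c, d):
        """Apply Op(r,c,d): move the empty cell at 1-indexed (r, c) in direction d.
        Returns the successor board, or None if the precondition fails."""
        deltas = {"L": (0, -1), "R": (0, 1), "U": (-1, 0), "D": (1, 0)}
        i, j = r - 1, c - 1                      # convert to 0-indexed
        if state[i][j] != 0:                     # precondition: (r, c) holds the blank
            return None
        di, dj = deltas[d]
        ni, nj = i + di, j + dj
        if not (0 <= ni < 3 and 0 <= nj < 3):    # blank would leave the board
            return None
        new = [row[:] for row in state]
        new[i][j], new[ni][nj] = new[ni][nj], 0  # effects: A[r,c] ← neighbor; neighbor ← 0
        return new

    # Example from the slide: op([[2, 8, 3], [1, 6, 4], [7, 0, 5]], 3, 2, "R")
    # yields [[2, 8, 3], [1, 6, 4], [7, 5, 0]].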
