
CHAPTER 3 SOLVING PROBLEMS BY SEARCHING



Presentation Transcript


  1. CHAPTER 3: SOLVING PROBLEMS BY SEARCHING 지 애 띠

  2. PROBLEM SOLVING AGENT
     • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
     • A problem-solving agent is one kind of goal-based agent; it decides what to do by finding sequences of actions that lead to desirable states.

  3. PROBLEM SOLVING AGENT
     • Goal formulation
     • Problem formulation
       - states
       - actions
     • Find solution
       - search
       - execution

  4. PROBLEM SOLVING AGENT

     function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
       inputs: percept, a percept
       static: seq, an action sequence, initially empty
               state, some description of the current world state
               goal, a goal, initially null
               problem, a problem formulation
       state ← UPDATE-STATE(state, percept)
       if seq is empty then do
         goal ← FORMULATE-GOAL(state)
         problem ← FORMULATE-PROBLEM(state, goal)
         seq ← SEARCH(problem)
       action ← FIRST(seq)
       seq ← REST(seq)
       return action
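The agent loop above can be sketched as a small runnable Python class. The "static" variables become attributes, and UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM, and SEARCH are passed in as plain functions, since the slides leave them abstract; all the names and the canned plan below are illustrative, not from the slides.

```python
# A minimal sketch of SIMPLE-PROBLEM-SOLVING-AGENT. The helper
# functions are supplied by the caller; here they are toy stand-ins.
class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []              # action sequence, initially empty
        self.state = None          # description of the current world state
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:                     # if seq is empty then do
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem)
        action, self.seq = self.seq[0], self.seq[1:]   # FIRST / REST
        return action

# Toy wiring: the "search" just returns a canned two-step plan.
agent = SimpleProblemSolvingAgent(
    update_state=lambda s, p: p,
    formulate_goal=lambda s: "C",
    formulate_problem=lambda s, g: (s, g),
    search=lambda prob: ["go B", "go C"],
)
print(agent("at A"))  # 'go B'  (plan is formulated, first action returned)
print(agent("at B"))  # 'go C'  (remaining plan is executed, no new search)
```

Note how search runs only when the stored sequence is empty; subsequent percepts just pop the next action, matching the pseudocode.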

  5. PROBLEM SOLVING AGENT
     Well-defined problems and solutions
     A problem can be defined formally by four components:
     - The initial state that the agent starts in.
     - A description of the possible actions available to the agent,
       • using a successor function: given a particular state x, SUCCESSOR-FN(x) returns a set of <action, successor> pairs.
       • The initial state and successor function together define the state space of the problem.
       • A path is a sequence of states connected by a sequence of actions.
     - The goal test, which determines whether a given state is a goal state.
     - A path cost function that assigns a numeric cost to each path.
       • step cost: the cost of taking action a to go from state x to state y is denoted by c(x, a, y).
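The four components can be sketched as a small Python class. The route-finding map, city labels, and method names below are illustrative assumptions, not from the slides; only the structure (initial state, successor function, goal test, step cost) comes from the definition above.

```python
# A minimal sketch of the four problem components for a toy
# route-finding map. Edge weights play the role of step costs.
class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial = initial          # the initial state
        self.goal = goal
        self.graph = graph              # {state: {successor: step cost}}

    def successor_fn(self, x):
        """SUCCESSOR-FN(x): return the <action, successor> pairs for x.
        Here the action is simply 'go <successor>'."""
        return [(f"go {y}", y) for y in self.graph[x]]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, x, action, y):
        """c(x, a, y): cost of taking action a from state x to state y."""
        return self.graph[x][y]

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
problem = RouteProblem("A", "C", graph)
print(problem.successor_fn("A"))            # [('go B', 'B'), ('go C', 'C')]
print(problem.step_cost("A", "go B", "B"))  # 1
```

A path cost is then just the sum of the step costs along a path, and a solution is any path whose final state passes `goal_test`.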

  6. PROBLEM SOLVING AGENT
     Well-defined problems and solutions
     - A solution is a path from the initial state to a goal state.
     - An optimal solution has the lowest path cost among all solutions.

  7. SEARCHING FOR SOLUTION
     Measuring problem-solving performance
     • Completeness: Is the algorithm guaranteed to find a solution when there is one?
     • Optimality: Does the strategy find the optimal solution?
     • Time complexity: How long does it take to find a solution?
     • Space complexity: How much memory is needed to perform the search?

  8. SEARCHING FOR SOLUTION

     function TREE-SEARCH(problem, fringe) returns a solution, or failure
       fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
       loop do
         if EMPTY?(fringe) then return failure
         node ← REMOVE-FIRST(fringe)
         if GOAL-TEST[problem] applied to STATE[node] succeeds
           then return SOLUTION(node)
         fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

  9. SEARCHING FOR SOLUTION

     function EXPAND(node, problem) returns a set of nodes
       successors ← the empty set
       for each <action, result> in SUCCESSOR-FN[problem](STATE[node]) do
         s ← a new NODE
         STATE[s] ← result
         PARENT-NODE[s] ← node
         ACTION[s] ← action
         PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
         DEPTH[s] ← DEPTH[node] + 1
         add s to successors
       return successors
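TREE-SEARCH and EXPAND together can be sketched in Python as follows. The `Node` fields mirror the pseudocode (state, parent, action, path cost, depth); the FIFO fringe and the example graph are illustrative choices, not mandated by the slides.

```python
from collections import deque

# A runnable sketch of TREE-SEARCH and EXPAND from the slides.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0, depth=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost, self.depth = action, path_cost, depth

def expand(node, graph):
    """EXPAND: generate a successor node for each <action, result> pair."""
    successors = []
    for result, cost in graph.get(node.state, {}).items():
        successors.append(Node(result, node, f"go {result}",
                               node.path_cost + cost, node.depth + 1))
    return successors

def solution(node):
    """SOLUTION: follow parent pointers back to recover the state path."""
    path = []
    while node:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))

def tree_search(initial, goal_test, graph):
    fringe = deque([Node(initial)])          # INSERT(MAKE-NODE(initial))
    while fringe:                            # loop do
        node = fringe.popleft()              # REMOVE-FIRST(fringe)
        if goal_test(node.state):
            return solution(node)
        fringe.extend(expand(node, graph))   # INSERT-ALL(EXPAND(...))
    return None                              # failure

graph = {"A": {"B": 1, "C": 4}, "B": {"D": 2}, "C": {"D": 1}, "D": {}}
print(tree_search("A", lambda s: s == "D", graph))  # ['A', 'B', 'D']
```

With a FIFO fringe this behaves as breadth-first search; swapping the queue discipline yields the other uninformed strategies on the following slides.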

  10. UNINFORMED SEARCH STRATEGIES
     Uninformed search
     - no additional information about states beyond that provided in the problem definition.
     - generate successors and distinguish a goal state from a nongoal state.
     Informed search (heuristic search)
     - strategies that know whether one nongoal state is "more promising" than another.

  11. UNINFORMED SEARCH STRATEGIES
     Breadth-first search
     - the root node is expanded first, then all the successors of the root node, then their successors, and so on.
     - TREE-SEARCH(problem, FIFO-QUEUE())
     - complete
     - the shallowest goal node is not necessarily the optimal one; optimal if the path cost is a nondecreasing function of the depth of the node.
     - time & space complexity: b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
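A self-contained breadth-first sketch, storing whole paths in the FIFO fringe for brevity (a simplification; the Node version on the previous slide is the general form). The graph is a made-up example with a goal at depth 3.

```python
from collections import deque

# Breadth-first search: the shallowest unexpanded node is always
# expanded next, because the fringe is a FIFO queue.
def breadth_first_search(initial, goal, graph):
    fringe = deque([[initial]])        # each fringe entry is a whole path
    while fringe:
        path = fringe.popleft()        # shallowest path first (FIFO)
        state = path[-1]
        if state == goal:
            return path
        for succ in graph.get(state, []):
            fringe.append(path + [succ])
    return None                        # failure

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["F"], "E": ["F"], "F": []}
print(breadth_first_search("A", "F", graph))  # ['A', 'B', 'D', 'F']
```

Because the FIFO queue expands all depth-k nodes before any depth-(k+1) node, the first goal found is always a shallowest one.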

  12. UNINFORMED SEARCH STRATEGIES
     Uniform-cost search
     - instead of expanding the shallowest node, expands the node with the lowest path cost.
     - not complete in general: if a node has a zero-cost action leading back to the same state, the search can get stuck in an infinite loop.
     - optimal: the first goal node selected for expansion is the optimal one.
     - complexity: O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is the smallest step cost. It can explore large trees of small steps before exploring paths involving large and perhaps useful steps. When all step costs are equal, O(b^⌈C*/ε⌉) is just b^d.
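Uniform-cost search is naturally implemented with a priority queue ordered by path cost g(n); the sketch below uses Python's `heapq` with a counter as tie-breaker. The graph and costs are illustrative; note that the cheapest route to G takes more steps than the direct edge.

```python
import heapq

# Uniform-cost search: always pop the fringe node with the lowest
# path cost. The first time a goal state is popped, its cost is optimal
# (assuming nonnegative step costs).
def uniform_cost_search(initial, goal, graph):
    counter = 0                                 # tie-breaker for the heap
    fringe = [(0, counter, initial, [initial])]
    while fringe:
        cost, _, state, path = heapq.heappop(fringe)  # lowest g(n) first
        if state == goal:
            return cost, path
        for succ, step in graph.get(state, {}).items():
            counter += 1
            heapq.heappush(fringe, (cost + step, counter, succ, path + [succ]))
    return None                                 # failure

graph = {"A": {"B": 1, "G": 10}, "B": {"C": 1}, "C": {"G": 1}}
print(uniform_cost_search("A", "G", graph))  # (3, ['A', 'B', 'C', 'G'])
```

The direct edge A→G (cost 10) sits in the fringe but is never popped, because the three-step path of cost 3 reaches the goal first.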

  13. UNINFORMED SEARCH STRATEGIES
     Depth-first search
     - always expands the deepest node in the current fringe.
     - implemented with a stack or a recursive function.
     - not complete, not optimal: even when a better goal is near the root, it can make the wrong choice and get stuck going down a very long (or even infinite) path; it generates O(b^m) nodes, and if the tree is unbounded, m is infinite.
     - very modest memory requirements: it needs to store only bm + 1 nodes.
     - backtracking search: O(m) memory; generates successors by modifying the current state description, keeping only one state description in memory.
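Depth-first search is the same loop with a LIFO stack in place of the FIFO queue. The finite, acyclic example graph below is an illustrative assumption; on an unbounded tree this function might never return, as the slide warns.

```python
# Depth-first search with an explicit stack: always expand the deepest
# path in the fringe.
def depth_first_search(initial, goal, graph):
    stack = [[initial]]
    while stack:
        path = stack.pop()                   # deepest path first (LIFO)
        state = path[-1]
        if state == goal:
            return path
        for succ in reversed(graph.get(state, [])):
            stack.append(path + [succ])      # push so leftmost child is on top
    return None                              # failure

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_first_search("A", "D", graph))  # ['A', 'B', 'D']
```

Only the current path and the siblings along it live on the stack, which is where the modest bm + 1 memory bound comes from.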

  14. UNINFORMED SEARCH STRATEGIES
     Depth-limited search
     - when the tree is unbounded, supply depth-first search with a predetermined depth limit l.
     - not complete: if l < d, the shallowest goal is beyond the depth limit.
     - not optimal: if l > d.
     - time complexity: O(b^l)
     - space complexity: O(bl)
     - For most problems, we will not know a good depth limit until we have solved the problem.
     - returns either a solution, or failure, or cutoff.

  15. UNINFORMED SEARCH STRATEGIES

     function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
       return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

     function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
       cutoff_occurred? ← false
       if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
       else if DEPTH[node] = limit then return cutoff
       else for each successor in EXPAND(node, problem) do
         result ← RECURSIVE-DLS(successor, problem, limit)
         if result = cutoff then cutoff_occurred? ← true
         else if result ≠ failure then return result
       if cutoff_occurred? then return cutoff else return failure
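A direct Python transcription of RECURSIVE-DLS, representing a node by its path from the root and using the strings "cutoff" and "failure" as the two failure markers; the chain graph is illustrative.

```python
# Depth-limited search: returns a solution path, "cutoff" when the
# depth limit was reached somewhere, or "failure" when the whole
# (bounded) subtree was searched without hitting the limit.
def depth_limited_search(initial, goal, graph, limit):
    return recursive_dls([initial], goal, graph, limit)

def recursive_dls(path, goal, graph, limit):
    state = path[-1]
    if state == goal:
        return path                        # SOLUTION(node)
    if len(path) - 1 == limit:             # DEPTH[node] = limit
        return "cutoff"
    cutoff_occurred = False
    for succ in graph.get(state, []):
        result = recursive_dls(path + [succ], goal, graph, limit)
        if result == "cutoff":
            cutoff_occurred = True
        elif result != "failure":
            return result
    return "cutoff" if cutoff_occurred else "failure"

graph = {"A": ["B"], "B": ["C"], "C": []}
print(depth_limited_search("A", "C", graph, 1))  # 'cutoff'  (l < d)
print(depth_limited_search("A", "C", graph, 2))  # ['A', 'B', 'C']
```

The distinction between the two markers matters for iterative deepening: "cutoff" means a deeper limit might still succeed, while "failure" is definitive.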

  16. UNINFORMED SEARCH STRATEGIES
     Iterative deepening depth-first search
     - finds the best depth limit by gradually increasing the limit.
     - combines the benefits of depth-first and breadth-first search:
       • complete when the branching factor is finite.
       • optimal when the path cost is a nondecreasing function of the depth.
       • memory requirements are very modest: O(bd).
     - time complexity: O(b^d); states are generated multiple times, but the repeated work is not wasteful in practice:
       N(IDS) = (d)b + (d − 1)b^2 + … + (1)b^d
       cf. N(BFS) = b + b^2 + … + b^d + (b^(d+1) − b)

  17. UNINFORMED SEARCH STRATEGIES

     function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
       inputs: problem, a problem
       for depth ← 0 to ∞ do
         result ← DEPTH-LIMITED-SEARCH(problem, depth)
         if result ≠ cutoff then return result
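Iterative deepening simply wraps the depth-limited routine in a loop over increasing limits. The sketch below is self-contained (it repeats a small depth-limited helper) and caps the loop at an illustrative `max_depth` in place of the pseudocode's infinity; the graph is a toy example.

```python
# Helper: depth-limited search over paths, returning a solution path,
# "cutoff", or "failure" (same contract as RECURSIVE-DLS).
def depth_limited(path, goal, graph, limit):
    state = path[-1]
    if state == goal:
        return path
    if len(path) - 1 == limit:
        return "cutoff"
    cutoff_occurred = False
    for succ in graph.get(state, []):
        result = depth_limited(path + [succ], goal, graph, limit)
        if result == "cutoff":
            cutoff_occurred = True
        elif result != "failure":
            return result
    return "cutoff" if cutoff_occurred else "failure"

# ITERATIVE-DEEPENING-SEARCH: try limits 0, 1, 2, ... until the result
# is not "cutoff"; the first solution found is at the shallowest depth.
def iterative_deepening_search(initial, goal, graph, max_depth=50):
    for depth in range(max_depth + 1):       # for depth <- 0 to infinity
        result = depth_limited([initial], goal, graph, depth)
        if result != "cutoff":
            return result                     # solution path or "failure"
    return "cutoff"

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(iterative_deepening_search("A", "D", graph))  # ['A', 'B', 'D']
```

Each iteration regenerates the shallow levels, but as the N(IDS) sum on the previous slide shows, the deepest level dominates the total work.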

  18. UNINFORMED SEARCH STRATEGIES
     Iterative lengthening search
     - the analogue of iterative deepening for optimality: uses increasing path-cost limits instead of increasing depth limits.
     - incurs substantial overhead compared to uniform-cost search.

  19. UNINFORMED SEARCH STRATEGIES
     Bidirectional search
     - runs two simultaneous searches: one forward from the initial state and the other backward from the goal, stopping when the two searches meet in the middle.
     - one or both of the searches check each node before it is expanded.
     - time complexity: O(b^(d/2)); the membership check can be done in constant time with a hash table.
     - space complexity: O(b^(d/2)); at least one of the search trees must be kept in memory.
     - completeness & optimality depend on the combination of search strategies used.
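A bidirectional breadth-first sketch: alternate one expansion step forward from the initial state and one backward from the goal, returning the path length (number of steps) when the two frontiers meet. It assumes actions are reversible so the backward search can use the same undirected graph; the chain graph is an illustrative example.

```python
from collections import deque

# Bidirectional search over an undirected graph. Each frontier keeps a
# hash table state -> depth, so the meet-in-the-middle check is O(1).
def bidirectional_search(initial, goal, graph):
    if initial == goal:
        return 0
    front = {initial: 0}      # depth from the initial state
    back = {goal: 0}          # depth from the goal
    fq, bq = deque([initial]), deque([goal])
    while fq and bq:
        # One expansion from each direction per round.
        for visited, other, queue in ((front, back, fq), (back, front, bq)):
            if not queue:
                continue
            state = queue.popleft()
            for succ in graph.get(state, []):
                if succ in other:                 # frontiers meet
                    return visited[state] + 1 + other[succ]
                if succ not in visited:
                    visited[succ] = visited[state] + 1
                    queue.append(succ)
    return None                                   # failure

# Undirected chain A-B-C-D-E (edges listed in both directions).
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search("A", "E", graph))  # 4 steps
```

Each side only searches to roughly half the solution depth, which is where the O(b^(d/2)) bound comes from.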

  20. UNINFORMED SEARCH STRATEGIES
     Comparing uninformed search strategies
     (collecting the figures from the preceding slides; completeness and optimality hold under the conditions noted there)

     Criterion   Breadth-First  Uniform-Cost  Depth-First  Depth-Limited  Iterative Deepening  Bidirectional
     Complete?   Yes            Yes           No           No             Yes                  Yes
     Time        O(b^(d+1))     O(b^⌈C*/ε⌉)   O(b^m)       O(b^l)         O(b^d)               O(b^(d/2))
     Space       O(b^(d+1))     O(b^⌈C*/ε⌉)   O(bm)        O(bl)          O(bd)                O(b^(d/2))
     Optimal?    Yes            Yes           No           No             Yes                  Yes

  21. AVOIDING REPEATED STATES
     Graph search
     • uses a closed list, which stores every expanded node.
     • if the current node matches a node on the closed list, it is discarded instead of being expanded.
     • if the newly discovered path is shorter than the original one, this can miss an optimal solution.

  22. AVOIDING REPEATED STATES

     function GRAPH-SEARCH(problem, fringe) returns a solution, or failure
       closed ← an empty set
       fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
       loop do
         if EMPTY?(fringe) then return failure
         node ← REMOVE-FIRST(fringe)
         if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
         if STATE[node] is not in closed then
           add STATE[node] to closed
           fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
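GRAPH-SEARCH is tree search plus a closed set of already-expanded states. The sketch below uses a FIFO fringe and an illustrative graph containing a cycle that would trap plain tree search in an infinite loop.

```python
from collections import deque

# GRAPH-SEARCH: a node whose state is already on the closed list is
# discarded rather than expanded, so each state is expanded at most once.
def graph_search(initial, goal, graph):
    closed = set()                       # closed <- an empty set
    fringe = deque([[initial]])
    while fringe:
        path = fringe.popleft()          # REMOVE-FIRST(fringe)
        state = path[-1]
        if state == goal:
            return path
        if state not in closed:          # expand each state at most once
            closed.add(state)
            for succ in graph.get(state, []):
                fringe.append(path + [succ])
    return None                          # failure

# 'A' and 'B' point at each other: a cycle that tree search would loop on.
graph = {"A": ["B"], "B": ["A", "C"], "C": []}
print(graph_search("A", "C", graph))  # ['A', 'B', 'C']
```

The node [A, B, A] still enters the fringe but is discarded on removal, so the search terminates despite the cycle.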

  23. SEARCHING WITH PARTIAL INFORMATION
     Sensorless problems
     - the agent could be in one of several initial states, and each action might lead to one of several possible successor states.
     - when the world is not fully observable, the agent must reason about the sets of states that it might get to; these are belief states.
     - an action is applied to a belief state by taking the union of the results of applying the action to each physical state in the belief state.
     - a solution is a path leading to a belief state all of whose members are goal states.
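The belief-state operations above can be sketched directly. The two-location world, the action names, and the transition table below are illustrative assumptions (a simplified vacuum-world-style example), not from the slides.

```python
# Sensorless search primitives: a belief state is a set of physical
# states. "Right" moves location L to R and leaves R unchanged.
def apply_to_belief(belief, action, transitions):
    """Union of applying `action` to each physical state in the belief."""
    return frozenset(transitions[state][action] for state in belief)

def is_goal_belief(belief, goal_states):
    """A belief state is a goal iff ALL of its members are goal states."""
    return belief <= goal_states        # subset test

transitions = {
    "L": {"Right": "R", "Left": "L"},
    "R": {"Right": "R", "Left": "L"},
}
belief = frozenset({"L", "R"})          # the agent doesn't know where it is
belief = apply_to_belief(belief, "Right", transitions)
print(sorted(belief))                           # ['R']
print(is_goal_belief(belief, frozenset({"R"})))  # True
```

Even with no sensing at all, the single action "Right" coerces the two-state belief into the singleton {R}, so the agent can guarantee reaching the goal.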

  24. SEARCHING WITH PARTIAL INFORMATION
     Contingency problems
     - when the agent can obtain new information from its sensors after acting, it faces a contingency problem.
     - the solution often takes the form of a tree, where each branch may be selected depending on the percepts received up to that point.
     Exploration problems
     - when the states and actions of the environment are unknown, the agent must act to discover them; an extreme case of the contingency problem.
