
Search


Presentation Transcript


  1. Search Tamara Berg CS 590-133 Artificial Intelligence Many slides throughout the course adapted from Dan Klein, Stuart Russell, Andrew Moore, Svetlana Lazebnik, Percy Liang, Luke Zettlemoyer

  2. Course Information Instructor: Tamara Berg (tlberg@cs.unc.edu) Course website: http://tamaraberg.com/teaching/Spring_14/ TAs: Shubham Gupta & Rohit Gupta Office Hours (Tamara): Tuesdays/Thursdays 4:45-5:45pm FB 236 
Office Hours (Shubham): Mondays 4-5pm & Friday 3-4pm SN 307 
Office Hours (Rohit): Wednesday 4-5pm & Friday 4-5pm SN 312 See website & previous slides for additional important course information.

  3. Announcements for today • Sign up for the class piazza mailing list here: • piazza.com/unc/spring2014/comp590133 • Reminder: This is a 3 credit course. If you are enrolled for 1 credit, please change to 3 credits. • HW1 will be released on the course website tonight (Shubham/Rohit will give a short overview at the end of class)

  4. Recall from last class

  5. Agents • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators

  6. Rational agents • For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and the agent’s built-in knowledge • Performance measure (utility function): An objective criterion for success of an agent's behavior

  7. Types of agents • Reflex agent: considers how the world IS; chooses an action based on the current percept (and maybe memory or a model of the world’s current state); does not consider the future consequences of its actions • Planning agent: considers how the world WOULD BE; makes decisions based on (hypothesized) consequences of actions; must have a model of how the world evolves in response to actions; must formulate a goal (test)

  8. Search • We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments (figure: start state and goal state)

  9. Search problem components • Initial state • Actions • Transition model • What state results from performing a given action in a given state? • Goal state • Path cost • Assume that it is a sum of nonnegative step costs • The optimal solution is the sequence of actions that gives the lowest path cost for reaching the goal
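A minimal Python sketch of these components as a problem interface; the class and method names are illustrative choices, not taken from the slides:

```python
# Hypothetical search-problem interface; names and signatures are illustrative.
class SearchProblem:
    def initial_state(self):
        """Return the state the agent starts in."""
        raise NotImplementedError

    def actions(self, state):
        """Return the actions applicable in the given state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from performing
        the given action in the given state."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: is this a goal state?"""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Nonnegative cost of one step; path cost is the sum of these."""
        return 1
```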

  10. Example: Romania • On vacation in Romania; currently in Arad • Flight leaves tomorrow from Bucharest • Initial state • Arad • Actions • Go from one city to another • Transition model • If you go from city A to city B, you end up in city B • Goal state • Bucharest • Path cost • Sum of edge costs (total distance traveled)
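For concreteness, the same components as data. The edge costs below follow the standard Russell & Norvig Romania map, and only a subset of the cities (those on or near the Arad-to-Bucharest routes) is included:

```python
# Fragment of the Romania map as an adjacency dict of edge costs (km),
# following the standard Russell & Norvig figure; subset of cities only.
ROMANIA = {
    'Arad':           {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind':         {'Arad': 75, 'Oradea': 71},
    'Oradea':         {'Zerind': 71, 'Sibiu': 151},
    'Sibiu':          {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti':        {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    'Craiova':        {'Rimnicu Vilcea': 146, 'Pitesti': 138, 'Drobeta': 120},
    'Timisoara':      {'Arad': 118, 'Lugoj': 111},
    'Lugoj':          {'Timisoara': 111, 'Mehadia': 70},
    'Mehadia':        {'Lugoj': 70, 'Drobeta': 75},
    'Drobeta':        {'Mehadia': 75, 'Craiova': 120},
    'Bucharest':      {'Fagaras': 211, 'Pitesti': 101},
}

# Initial state: 'Arad'; goal state: 'Bucharest'.
# Actions from a city are moves to its neighbors; the transition model puts
# you in the chosen neighbor; the path cost is the sum of edge distances.
```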

  11. State space • The initial state, actions, and transition model define the state space of the problem • The set of all states reachable from the initial state by any sequence of actions • Can be represented as a directed graph where the nodes are states and the links between nodes are actions

  12. Vacuum world state space graph
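Since the figure is not reproduced here, a rough illustration of how that state space can be enumerated for the two-location vacuum world; the Left/Right/Suck action model below is the usual one and is assumed, not taken from the slide:

```python
from itertools import product

# Enumerate the 8 states of the two-location vacuum world:
# (robot location, dirt in A?, dirt in B?)
states = [(loc, dirt_a, dirt_b)
          for loc, dirt_a, dirt_b in product('AB', [True, False], [True, False])]

def successors(state):
    """Successor states for the actions Left, Right, Suck (illustrative model)."""
    loc, dirt_a, dirt_b = state
    yield 'Left',  ('A', dirt_a, dirt_b)      # move to square A
    yield 'Right', ('B', dirt_a, dirt_b)      # move to square B
    if loc == 'A':
        yield 'Suck', ('A', False, dirt_b)    # clean the current square
    else:
        yield 'Suck', ('B', dirt_a, False)

# State-space graph: nodes are states, labeled edges are actions.
graph = {s: list(successors(s)) for s in states}
print(len(states), 'states')   # 8
```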

  13. Search • Given: • Initial state • Actions • Transition model • Goal state • Path cost • How do we find the optimal solution? • How about building the state space and then using Dijkstra’s shortest path algorithm? • Complexity of Dijkstra’s is O(E + V log V), where V is the number of states and E is the number of transitions • The state space may be huge!

  14. Search: Basic idea • Let’s begin at the start state and expand it by making a list of all possible successor states • Maintain a frontier: the set of all leaf nodes available for expansion at any point • At each step, pick a state from the frontier to expand • Keep going until you reach a goal state or there are no more states to explore • Try to expand as few states as possible

  15. Search tree • “What if” tree of sequences of actions and outcomes • The root node corresponds to the starting state • The children of a node correspond to the successor states of that node’s state • A path through the tree corresponds to a sequence of actions • A solution is a path ending in a goal state • Edges are labeled with actions and costs

  16. Tree Search Algorithm Outline • Initialize the frontier using the start state • While the frontier is not empty • Choose a frontier node to expand according to the search strategy and take it off the frontier • If the node contains the goal state, return solution • Else expand the node and add its children to the frontier
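A compact Python sketch of this outline. The `successors` function (yielding (action, next_state, cost) triples), the tuple node representation, and the `pop_index` parameter are assumptions of the sketch:

```python
# Illustrative tree search following the outline above.  `successors(state)`
# is assumed to yield (action, next_state, cost) triples; `pop_index` decides
# which frontier node is expanded next (i.e., the search strategy).
def tree_search(start, is_goal, successors, pop_index=0):
    frontier = [(start, [])]                  # each entry: (state, path of actions)
    while frontier:
        state, path = frontier.pop(pop_index)
        if is_goal(state):
            return path                       # solution: a sequence of actions
        for action, next_state, cost in successors(state):
            frontier.append((next_state, path + [action]))
    return None                               # frontier exhausted: no solution
```

Note that on a state space with cycles (like the Romania map, where you can go Arad to Sibiu and back to Arad) this version can keep generating repeated states forever; the explored set introduced on slide 24 addresses exactly that.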

  17-23. Tree search example (the search tree is expanded one step per slide) Start: Arad Goal: Bucharest

  24. Handling repeated states • Initialize the frontier using the starting state • While the frontier is not empty • Choose a frontier node to expand according to the search strategy and take it off the frontier • If the node contains the goal state, return solution • Else expand the node and add its children to the frontier • To handle repeated states: • Keep an explored set, which remembers every expanded node • Newly generated nodes already in the explored set or frontier can be discarded instead of added to the frontier
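The same sketch with an explored set added, following the outline above (again, `successors` and the node representation are assumptions of the sketch):

```python
# Illustrative graph search: tree search plus an explored set, so each state
# is expanded at most once and repeated states are discarded.
def graph_search(start, is_goal, successors, pop_index=0):
    frontier = [(start, [])]
    frontier_states = {start}                 # states currently on the frontier
    explored = set()                          # states already expanded
    while frontier:
        state, path = frontier.pop(pop_index)
        frontier_states.discard(state)
        if is_goal(state):
            return path
        explored.add(state)
        for action, next_state, cost in successors(state):
            # Discard newly generated nodes already explored or on the frontier.
            if next_state not in explored and next_state not in frontier_states:
                frontier.append((next_state, path + [action]))
                frontier_states.add(next_state)
    return None
```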

  25-31. Search without repeated states (the search tree is expanded one step per slide) Start: Arad Goal: Bucharest

  32. Tree Search Algorithm Outline • Initialize the frontier using the starting state • While the frontier is not empty • Choose a frontier node to expand according to the search strategy and take it off the frontier • If the node contains the goal state, return solution • Else expand the node and add its children to the frontier Main question: What should our search strategy be, i.e., how do we choose which frontier node to expand?
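One common way to make that choice concrete, assumed here rather than stated on the slide, is to keep the search loop fixed and vary only the frontier's ordering discipline:

```python
from collections import deque
import heapq

# The strategy is just how the frontier orders nodes (illustrative pairing):
#   FIFO queue                   -> breadth-first search  (shallowest node first)
#   LIFO stack                   -> depth-first search    (deepest node first)
#   priority queue on g(n)       -> uniform-cost search   (cheapest path so far)
#   priority queue on g(n)+h(n)  -> A* search             (informed; see below)
bfs_frontier = deque()   # expand with popleft()
dfs_frontier = []        # expand with pop()
ucs_frontier = []        # expand with heapq.heappop(); push (path_cost, node) pairs
```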

  33. Uninformed search strategies • A search strategy is defined by picking the order of node expansion • Uninformed search strategies use only the information available in the problem definition • Breadth-first search • Depth-first search • Iterative deepening search • Uniform-cost search

  34. Informed search strategies • Idea: give the algorithm “hints” about the desirability of different states • Use an evaluation function to rank nodes and select the most promising one for expansion • Greedy best-first search • A* search
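A minimal sketch of the evaluation-function idea, assuming a heuristic function h(state) is available; the function below is an illustration and omits repeated-state handling to stay short:

```python
import heapq

# Illustrative best-first selection using an evaluation function f(n):
#   Greedy best-first: f(n) = h(n)           (heuristic estimate to the goal)
#   A* search:         f(n) = g(n) + h(n)    (path cost so far + estimate)
def best_first_search(start, is_goal, successors, h, use_path_cost=True):
    frontier = [(h(start), 0, start, [])]     # entries: (f, g, state, path)
    while frontier:
        f, g, state, path = heapq.heappop(frontier)   # most promising node
        if is_goal(state):
            return path
        for action, next_state, cost in successors(state):
            g2 = g + cost
            f2 = (g2 if use_path_cost else 0) + h(next_state)
            heapq.heappush(frontier, (f2, g2, next_state, path + [action]))
    return None
```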

  35. Uninformed search

  36. Breadth-first search • Expand shallowest node in the frontier Example state space graph for a tiny search problem Example from P. Abbeel and D. Klein

  37-44. Breadth-first search (expansion shown one step per slide) • Expansion order: (S, d, e, p, b, c, e, h, r, q, a, a, h, r, p, q, f, p, q, f, q, c, G)

  45. Breadth-first search • Expand shallowest node in the frontier • Implementation: frontier is a FIFO queue Example state space graph for a tiny search problem Example from P. Abbeel and D. Klein
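An illustrative sketch of breadth-first search over an adjacency dict like the ROMANIA fragment above; the function name and the path-of-states return value are choices of this sketch:

```python
from collections import deque

# Breadth-first search: the frontier is a FIFO queue, so the shallowest
# node is always expanded first.  Returns a path as a list of states.
def bfs(graph, start, goal):
    frontier = deque([(start, [start])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()      # FIFO: oldest (shallowest) node
        if state == goal:
            return path
        for neighbor in graph[state]:
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append((neighbor, path + [neighbor]))
    return None

# e.g. bfs(ROMANIA, 'Arad', 'Bucharest')
#      -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']  (fewest hops, not fewest km)
```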

  46. Depth-first search • Expand deepest node in the frontier

  47-48. Depth-first search (expansion shown one step per slide) • Expansion order: (S, d, b, a, c, a, e, h, p, q, q, r, f, c, a, G)
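And a matching sketch for depth-first search, again assuming an adjacency-dict graph; the structure is identical except that the frontier is treated as a LIFO stack:

```python
# Depth-first search: the frontier is a LIFO stack, so the deepest (most
# recently generated) node is expanded first.  The path found is not
# necessarily the shortest one.
def dfs(graph, start, goal):
    frontier = [(start, [start])]             # a plain list used as a stack
    explored = set()
    while frontier:
        state, path = frontier.pop()          # LIFO: newest (deepest) node
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for neighbor in graph[state]:
            if neighbor not in explored:
                frontier.append((neighbor, path + [neighbor]))
    return None
```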
