
CS 5368: Artificial Intelligence Fall 2011






Presentation Transcript


  1. CS 5368: Artificial Intelligence, Fall 2011. Search Part 1. 9/1/2011. Mohan Sridharan. Slides adapted from Dan Klein.

  2. General Questions • Access to textbook and course material? • Python tutorial or online link? • Read Chapters 1, 2 (RN); Chapter 2 (KF)? • Any questions?

  3. This Lecture • Agents that plan, i.e., look ahead. • Search problems. • Uninformed Search Methods: • Depth-First Search. • Breadth-First Search. • Uniform-Cost Search. • Heuristic Search Methods: • Greedy Search.

  4. Review: Reflex Agents • Reflex agents: • Choose action based on current percept (and maybe memory). • May have memory or model of the current state. • Do not consider future consequences of actions. • Act on how the world is. • Can a reflex agent be rational?

  5. Review: Goal Based Agents • Goal-based agents: • Ask “what if”. • Decisions based on (hypothesized) action consequences. • Need a model of how the world evolves in response to actions. • Act on an estimate of how the world “would be”.

  6. Search Problems • A search problem consists of: • State space. • Actions and successor function (e.g., successor reached by action “N” with cost 1.0). • Start state, goal test, path cost. • A solution is an action sequence (a plan) that transforms the start state into a goal state.

  7. Example: Romania • State space: cities. • Successor function: go to an adjacent city; cost = distance. • Start state: Arad. • Goal test: is state == Bucharest? • Solution?

  8. State Space Graphs • State space graph: a mathematical model of a search problem. • For every search problem, there is a corresponding state space graph. • The successor function is represented by arcs. • We can rarely build this graph in memory, so we do not! [Figure: ridiculously tiny search graph for a tiny search problem]

  9. State Space Sizes? • Search Problem: Eat all of the food  • Pacman positions: 30 • Food count: 30 • Ghost positions: 12 • Pacman facing: up, down, left, right.
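The counts on this slide invite a quick back-of-the-envelope calculation. A minimal sketch in Python, assuming two ghosts (the slide gives the number of ghost positions but not the number of ghosts):

```python
# State-count arithmetic for the Pacman example on this slide.
pacman_positions = 30
food_pellets = 30
ghost_positions = 12
num_ghosts = 2      # assumption: the slide does not say how many ghosts
facings = 4         # up, down, left, right

# Full world states: position x facing x ghost configurations x food subsets.
world_states = pacman_positions * facings * ghost_positions**num_ghosts * 2**food_pellets

# States needed just to plan a path: only Pacman's position matters.
pathing_states = pacman_positions

# States for "eat all the food": position plus which pellets remain.
eat_all_states = pacman_positions * 2**food_pellets
```

The point of the exercise: the search state should contain only what the problem needs, since each extra component multiplies the state space.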

  10. Search Trees • A search tree: • This is a “what if” tree of plans and outcomes. • Start state at the root node. • Children correspond to successors. • Nodes contain states and correspond to the plans that reach those states. • For most problems, we can never actually build the whole tree! [Figure: successor arcs labeled (“N”, 1.0) and (“E”, 1.0)]

  11. Search Tree Example • Search: • Expand out possible plans. • Maintain a fringe of unexpanded plans. • Try to expand as few tree nodes as possible.

  12. General Tree Search • Important ideas: • Fringe; frontier; open list. • Expansion. Exploration strategy. • Main question: which fringe nodes to explore? Pseudo-code is in Figure 3.7.
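The loop behind Figure 3.7 can be sketched in a few lines. This is an illustrative toy implementation (the graph and names are hypothetical, not from the slides); note that swapping the fringe discipline is the only difference between breadth-first and depth-first behavior:

```python
from collections import deque

def tree_search(start, goal_test, successors, fifo=True):
    """Generic tree search: the fringe holds partial plans (paths).
    FIFO expansion gives BFS behavior; LIFO gives DFS behavior."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft() if fifo else fringe.pop()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            fringe.append(path + [nxt])   # expand: extend the plan
    return None

# Illustrative toy graph (not from the slides).
graph = {"S": ["a", "b"], "a": ["G"], "b": ["c"], "c": ["G"], "G": []}
bfs_plan = tree_search("S", lambda s: s == "G", graph.__getitem__, fifo=True)
dfs_plan = tree_search("S", lambda s: s == "G", graph.__getitem__, fifo=False)
```

Both calls answer the slide's "main question" differently: the FIFO fringe explores shallow plans first, the LIFO fringe dives deep.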

  13. State Space Graphs vs. Search Trees • Each NODE in the search tree is an entire PATH in the state space graph. • We construct both on demand – and we construct as little as possible. [Figure: a state space graph and its corresponding search tree]

  14. Review: Breadth-First Search • Strategy: expand the shallowest node first. • Implementation: the fringe is a FIFO queue. [Figure: search tree expanded in tiers]

  15. Review: Depth-First Search • Strategy: expand the deepest node first. • Implementation: the fringe is a LIFO stack. [Figure: search tree expanded along one deep branch]

  16. Search Algorithm Properties • Complete: guaranteed to find a solution if one exists? • Optimal: guaranteed to find the least-cost path? • Time complexity: how long to find a solution? • Space complexity: how much memory for search? • Variables: b (branching factor), m (maximum tree depth), s (depth of the shallowest solution).

  17. BFS Again… • Optimal only if path cost is a non-decreasing function of node depth. • Solution at (shallowest) depth s: time complexity O(b^(s+1)). • Every generated node remains in memory: space complexity O(b^s). • Exponential complexity is scary! See Figure 3.13  • Memory requirements are a bigger problem than execution time!

  18. DFS Again… • Infinite paths make DFS (with a LIFO fringe) incomplete: complete? No; optimal? No; time: infinite; space: infinite. • How can we fix this? Solution: cycle/path checking. [Figure: graph with a cycle between START, a, and b on the way to GOAL]
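The cycle/path-checking fix can be sketched as follows. The cyclic toy graph mirrors the spirit of the slide's figure; all names are illustrative:

```python
def dfs_path_checking(start, goal_test, successors):
    """DFS where a successor already on the current path is skipped,
    so a cyclic graph can no longer trap the search."""
    stack = [[start]]                # LIFO fringe of partial paths
    while stack:
        path = stack.pop()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in path:      # the cycle/path check
                stack.append(path + [nxt])
    return None

# Illustrative cyclic graph: plain DFS could loop START -> a -> START forever.
graph = {"START": ["a"], "a": ["START", "b"], "b": ["a", "GOAL"], "GOAL": []}
plan = dfs_path_checking("START", lambda s: s == "GOAL", graph.__getitem__)
```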

  19. DFS • With cycle checking, DFS is complete.* • Time complexity: O(b^(m+1)). • Why not just do BFS then? • When is DFS optimal? [Figure: search tree with 1 node, b nodes, b^2 nodes, …, b^m nodes across m tiers] * Or graph search – later.

  20. DFS • Pseudo-code in Figure 3.16. • Once a node has been expanded and its descendants explored, it can be removed from memory! • Space complexity is O(bm), much smaller than BFS. • The main benefit is the space complexity. • More sophistication: • Backtracking search (Chapter 6). • Depth-limited search (Figure 3.17 in Section 3.4.4). [Table: complete? Yes; optimal? No; time O(b^(m+1)); space O(bm)]

  21. BFS vs. DFS • DFS (with cycle checking): complete? Yes; optimal? No; time O(b^(m+1)); space O(bm). • BFS: complete? Yes; optimal? No*; time O(b^(s+1)); space O(b^s). [Figure: search tree with 1 node, b nodes, b^2 nodes, …, b^s nodes at the solution tier s, down to b^m nodes]

  22. Comparisons • When will BFS outperform DFS? • When will DFS outperform BFS?

  23. Iterative Deepening • Iterative deepening uses depth-limited search as a subroutine (Figure 3.18): • Do a DFS which only searches for paths of length 1 or less. • If “1” failed, do a DFS which only searches paths of length 2 or less. • If “2” failed, do a DFS which only searches paths of length 3 or less. • …and so on. [Table: DFS: Yes, No, O(b^(m+1)), O(bm); BFS: Yes, No*, O(b^(s+1)), O(b^s); ID: Yes, No*, O(b^(s+1)), O(bs)]

  24. Iterative Deepening • Time complexity for depth d: O(b^d). • Asymptotically similar to BFS. • N(IDS) = (d)b + (d-1)b^2 + … + (1)b^d. • Can be combined with BFS as well. • Best choice among uninformed search methods!
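A minimal sketch of iterative deepening exactly as the two slides above describe it: depth-limited DFS run in a loop with growing limits. Names and the toy graph are illustrative:

```python
def depth_limited_search(state, goal_test, successors, limit, path=None):
    """DFS that refuses to extend paths longer than `limit` actions."""
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        found = depth_limited_search(nxt, goal_test, successors,
                                     limit - 1, path + [nxt])
        if found:
            return found
    return None

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Try limits 0, 1, 2, ... so the shallowest solution is found first."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(start, goal_test, successors, limit)
        if found:
            return found
    return None

# Illustrative toy graph: the shallowest goal path has length 2.
graph = {"S": ["a", "b"], "a": ["G"], "b": ["c"], "c": ["G"], "G": []}
plan = iterative_deepening_search("S", lambda s: s == "G", graph.__getitem__)
```

Shallow tiers are re-searched on every iteration, but since the deepest tier dominates the node count, the asymptotic cost matches BFS while the memory footprint stays DFS-sized.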

  25. Costs on Actions • BFS finds the shortest path in terms of number of transitions. • It does not find the least-cost path. [Figure: state space graph from START to GOAL with a cost on each action]

  26. Uniform-cost Search • Expand nodes in order of optimal path cost. • The first goal state expanded (the “shallowest” in cost) is an optimal solution. • Optimal for any step-cost function (with costs bounded below by some positive ε)! • Pseudo-code in Figure 3.14.

  27. Uniform Cost Search • Expand the cheapest node first: the fringe is a priority queue ordered by cumulative path cost. [Figure: search tree annotated with path costs and cost contours]
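A sketch of uniform-cost search with a priority-queue fringe, in the spirit of Figure 3.14; the weighted toy graph is hypothetical, not the slide's example:

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """UCS sketch: the fringe is a priority queue keyed on cumulative
    path cost; successors(state) yields (next_state, step_cost) pairs."""
    fringe = [(0, start, [start])]
    cheapest = {}                        # best cost found so far per state
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return cost, path            # first goal popped is cheapest
        if state in cheapest and cheapest[state] <= cost:
            continue                     # skip stale, more expensive entry
        cheapest[state] = cost
        for nxt, step in successors(state):
            heapq.heappush(fringe, (cost + step, nxt, path + [nxt]))
    return None

# Illustrative weighted graph: the fewest-transitions path S -> a -> G
# costs 6, but the cheapest path is S -> a -> b -> G with cost 3.
graph = {"S": [("a", 1), ("b", 4)], "a": [("b", 1), ("G", 5)],
         "b": [("G", 1)], "G": []}
cost, plan = uniform_cost_search("S", lambda s: s == "G", graph.__getitem__)
```

This illustrates the previous slide's point: BFS would stop at the two-transition path, while UCS finds the three-transition but cheaper one.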

  28. Priority Queue Refresher • A priority queue is a data structure into which you can insert (key, value) pairs and from which you can retrieve the entry with the lowest key. • You can decrease a key’s priority by pushing it again. • Unlike a regular queue, insertions are not constant time, usually O(log n). • We will need priority queues for cost-sensitive search methods.
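Python's heapq module behaves exactly this way: a binary heap with O(log n) insertion, and "decrease key" done by pushing a fresh entry while the stale one stays behind in the heap (to be popped later and ignored):

```python
import heapq

pq = []                                  # Python's heapq as a priority queue
heapq.heappush(pq, (3, "plan-to-b"))     # (priority, value) pairs
heapq.heappush(pq, (1, "plan-to-a"))
heapq.heappush(pq, (2, "plan-to-c"))

# "Decrease key" by pushing again with a lower priority; the old
# (3, "plan-to-b") entry remains in the heap as a stale duplicate.
heapq.heappush(pq, (0, "plan-to-b"))

priority, value = heapq.heappop(pq)      # lowest priority comes out first
```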

  29. C*/ tiers DFS, BFS and UCS • See Figure 3.21 for comparison of all uninformed search strategies. O(bm) O(bm+1) Y N O(bs) O(bs+1) Y N O(bC*/) O(bC*/) Y* Y b … * UCS can fail if actions can get arbitrarily cheap

  30. Uniform Cost Issues • Remember: UCS explores increasing cost contours. • The good: UCS is complete and optimal! • The bad: • Explores options in every “direction”. • No information about goal location. [Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 spreading from Start toward Goal]

  31. Search Heuristics • A heuristic is any estimate of how close a state is to a goal. • Designed for a particular search problem. • Examples: Manhattan distance, Euclidean distance. [Figure: Pacman example with heuristic values 10, 5, and 11.2]
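The two example heuristics are one-liners; here points are (x, y) grid coordinates:

```python
import math

def manhattan(p, q):
    """Grid distance: sum of absolute coordinate differences."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

Manhattan distance suits grid worlds like Pacman, where diagonal moves are impossible; Euclidean distance suits agents that can move freely.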

  32. Heuristics

  33. Best First / Greedy Search • Expand the node that seems closest… • What can go wrong?

  34. Best First / Greedy Search • A common case: best-first takes you straight to the (wrong) goal. • Worst case: like a badly-guided DFS. • Can explore everything. • Can get stuck in loops without cycle checking. • Like DFS in completeness (finite state spaces with cycle checking). [Figure: search trees for the common case and the worst case]
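A sketch of greedy best-first search with the cycle checking the slide calls for; the toy graph and heuristic table are illustrative, not from the slides:

```python
import heapq

def greedy_search(start, goal_test, successors, h):
    """Greedy best-first: order the fringe by h(state) alone, ignoring
    the cost already paid; hence no optimality guarantee."""
    fringe = [(h(start), start, [start])]
    visited = set()                  # cycle checking keeps it complete
    while fringe:                    # on finite state spaces
        _, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(fringe, (h(nxt), nxt, path + [nxt]))
    return None

# Illustrative toy graph and heuristic table: greedy follows the
# smaller h value through "a" without ever comparing path costs.
graph = {"S": ["a", "b"], "a": ["G"], "b": ["G"], "G": []}
h = {"S": 5, "a": 1, "b": 2, "G": 0}
plan = greedy_search("S", lambda s: s == "G", graph.__getitem__, h.__getitem__)
```

Compare this with the UCS sketch earlier: the only structural change is the priority key, h(state) instead of cumulative path cost, and that single change is what forfeits optimality.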
