ITCS 3153 Artificial Intelligence

Presentation Transcript


  1. ITCS 3153 Artificial Intelligence Lecture 4 Uninformed Searches (cont)

  2. Cost of Breadth-first Search (BFS) • b – max branching factor (infinite?) • d – depth to shallowest goal node • m – max length of any path (infinite?) • Analysis of space and time performance… • g(n) is O(f(n)) means that there are positive constants c and n0 such that 0 ≤ g(n) ≤ c·f(n) for all n ≥ n0
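
For a concrete reading of those symbols (a standard worst-case count, not spelled out on the slide): if every node has b successors and the shallowest goal sits at depth d, BFS may generate every node down to that depth,

1 + b + b^2 + … + b^d = (b^(d+1) − 1) / (b − 1) = O(b^d)

so both the time and the space of BFS grow as O(b^d).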

  3. BFS Space Requirements • What’s important is the path, not just the existence of the goal in the tree. • We must be able to reconstruct the path to the goal. • We’ve got to store the tree for reconstruction.
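
A minimal Python sketch of this idea (not from the slides; `start`, `is_goal`, and `successors` are placeholder names): every generated node remembers its parent, so once a goal is dequeued the path can be rebuilt by walking those pointers back to the start.

```python
from collections import deque

def bfs_with_path(start, is_goal, successors):
    """Breadth-first search that stores parent pointers so the solution
    path, not just the goal node, can be returned."""
    frontier = deque([start])
    parent = {start: None}              # also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if is_goal(node):
            path = []                   # walk parents back to the start
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for child in successors(node):
            if child not in parent:
                parent[child] = node
                frontier.append(child)
    return None                         # goal not reachable
```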

  4. Cost of Depth-first Search • time complexity = O(b^m) • length of longest path anchors upper bound • space complexity = O(bm) • What’s an example that highlights difference in performance between BFS and DFS?
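
A recursive sketch of tree-search DFS (mine, assuming a finite tree and the same placeholder `successors` function) that makes the bounds visible: at any moment only the current path of at most m nodes, plus the untried siblings along it, is in memory, hence O(bm) space, while in the worst case O(b^m) nodes are still visited.

```python
def dfs(node, is_goal, successors, path=None):
    """Depth-first tree search; the recursion holds one path at a time."""
    if path is None:
        path = [node]
    if is_goal(node):
        return list(path)
    for child in successors(node):
        path.append(child)
        result = dfs(child, is_goal, successors, path)
        if result is not None:
            return result
        path.pop()                      # backtrack: discard this branch
    return None                         # no goal below this node
```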

  5. Iterative Deepening • Essentially DFS with a depth limit • Why? • Remember the space complexity, O(bm), and time complexity, O(b^m), of DFS? • So... impose a small depth limit • But what if the limit is smaller than d? • we’ll never find the solution • So… increment the depth limit starting from 0
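
A sketch of depth-limited DFS plus the iterative-deepening driver the slide describes (my code; the `max_depth` cap is an added safety net so the loop terminates even when no goal exists):

```python
def depth_limited_dfs(node, is_goal, successors, limit):
    """DFS that refuses to expand nodes deeper than `limit`."""
    if is_goal(node):
        return [node]
    if limit == 0:
        return None                     # cutoff reached
    for child in successors(node):
        result = depth_limited_dfs(child, is_goal, successors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a goal appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None
```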

  6. Bidirectional Search • Search from goal to start • Search from start to goal

  7. Bidirectional Search • Do you always know the predecessors of a goal state? • Do you always know the goal state? • 8-puzzle • Path planning • Chess
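
One way bidirectional search can be sketched (my construction, assuming both a `successors` and a `predecessors` function are available, which is exactly the question the slide raises): grow one BFS layer from each end and stop as soon as the two searches discover a common state.

```python
from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Alternate one BFS layer forward from the start and one backward
    from the goal; stop when the frontiers share a state."""
    if start == goal:
        return [start]
    parent_f = {start: None}                     # forward bookkeeping
    parent_b = {goal: None}                      # backward bookkeeping
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parent, other, neighbors):
        for _ in range(len(frontier)):           # exactly one layer
            node = frontier.popleft()
            for nxt in neighbors(node):
                if nxt not in parent:
                    parent[nxt] = node
                    frontier.append(nxt)
                if nxt in other:
                    return nxt                   # the frontiers meet here
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parent_f, parent_b, successors)
        if meet is None:
            meet = expand(frontier_b, parent_b, parent_f, predecessors)
        if meet is not None:
            half_f = []                          # start ... meet
            n = meet
            while n is not None:
                half_f.append(n)
                n = parent_f[n]
            half_f.reverse()
            half_b = []                          # (node after meet) ... goal
            n = parent_b[meet]
            while n is not None:
                half_b.append(n)
                n = parent_b[n]
            return half_f + half_b
    return None
```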

  8. Avoid Repeated States • How might you create repeated states in 8-puzzle? • How can you detect repeated states? • What about preserving lowest cost path among repeated states? • uniform-cost search (shortest path) and BFS w/ constant step costs
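
For the last bullet, here is a sketch of uniform-cost search with repeated-state handling (my code; `successors` is assumed to yield `(next_state, step_cost)` pairs): a repeated state is one whose cheapest known cost is already recorded, and a cheaper rediscovery replaces the old entry.

```python
import heapq
import itertools

def uniform_cost_search(start, is_goal, successors):
    """Expand states in order of path cost g; keep only the cheapest
    known path to each repeated state."""
    counter = itertools.count()                  # tie-breaker for the heap
    frontier = [(0, next(counter), start, [start])]
    best_g = {start: 0}                          # cheapest cost found so far
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                             # stale entry; a cheaper path exists
        if is_goal(state):
            return g, path
        for nxt, step in successors(state):
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g              # better path to a repeated state
                heapq.heappush(frontier, (new_g, next(counter), nxt, path + [nxt]))
    return None
```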

  9. Interesting problems • Exercise 3.9: • 3 cannibals and 3 missionaries and a boat that can hold one or two people are on one side of the river. Get everyone across the river. • 8-puzzle and 15-puzzle, invented by Sam Loyd in the 1870s. Think about search space. • Rubik’s cube • Traveling Salesman Problem (TSP)

  10. 3 Cannibals and 3 Missionaries • Three cannibals and three missionaries must cross a river. Their boat can only hold two people. • If the cannibals outnumber the missionaries on either side of the river, the missionaries are in trouble (I won't describe the results). Each missionary and each cannibal can row the boat. How can all six get across the river?
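
A sketch of how this can be cast as a search problem for the BFS code above (the state encoding is one common choice, not the only one): a state records how many missionaries and cannibals are on the left bank and where the boat is.

```python
# State: (missionaries on left bank, cannibals on left bank, boat side),
# with boat side 'L' or 'R'.  Everyone starts on the left bank.
START, GOAL = (3, 3, 'L'), (0, 0, 'R')

def bank_safe(m, c):
    """A bank is safe if no missionaries are there or they are not outnumbered."""
    return m == 0 or m >= c

def valid(state):
    m, c, _ = state
    return (0 <= m <= 3 and 0 <= c <= 3
            and bank_safe(m, c) and bank_safe(3 - m, 3 - c))

def successors(state):
    """Ferry one or two people from the boat's bank to the other side."""
    m, c, boat = state
    sign = -1 if boat == 'L' else 1              # passengers leave the boat's bank
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        nxt = (m + sign * dm, c + sign * dc, 'R' if boat == 'L' else 'L')
        if valid(nxt):
            yield nxt
```

Feeding `START`, a goal test like `lambda s: s == GOAL`, and this `successors` into the earlier `bfs_with_path` sketch returns a shortest sequence of crossings.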

  11. Farmer, Fox, Goose, and Bag of Corn • Puzzle concerning a farmer, a fox, a goose, and a bag of corn. • The farmer wants to get all of this stuff across a river. His boat will only hold himself, and one of the other three items. The fox will eat the goose if the farmer is not present, and the goose will eat the corn if the farmer is not present. How can the farmer transport all three items across the river?

  12. Chapter 4 – Informed Search • INFORMED? • Uses problem-specific knowledge beyond the definition of the problem itself • selecting best lane in traffic • playing a sport • what’s the heuristic (or evaluation function)? www.curling.ca

  13. Best-first Search • Use an evaluation function to select node to expand • f(n) = evaluation function = expected distance to goal • select the node that minimizes f(n) • but if we knew the ‘best’ node to explore it wouldn’t be search!!! • Let’s use heuristics for our evaluation functions
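
A generic best-first sketch in the same style as the earlier code (my naming; `f` is whatever evaluation function gets plugged in): the frontier is a priority queue ordered by f(n), so the node that currently looks best is always expanded next.

```python
import heapq
import itertools

def best_first_search(start, is_goal, successors, f):
    """Best-first search: always expand the frontier node with smallest f(n)."""
    counter = itertools.count()                  # tie-breaker for the heap
    frontier = [(f(start), next(counter), start, [start])]
    visited = {start}
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path
        for child in successors(node):
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier,
                               (f(child), next(counter), child, path + [child]))
    return None
```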

  14. Heuristics • A function, h(n), that estimates cost of cheapest path from node n to the goal • h(n) = 0 if n == goal node
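
One concrete example of such an h(n), anticipating the route-finding example on the next slide (the `coords` dictionary of map positions is a hypothetical input): straight-line distance to the goal, which is 0 at the goal itself and never exceeds the true travel distance.

```python
import math

def straight_line_distance(coords, goal):
    """Build h(n) = straight-line distance from n to the goal, given a
    dict mapping each location to (x, y) coordinates."""
    gx, gy = coords[goal]
    def h(n):
        x, y = coords[n]
        return math.hypot(x - gx, y - gy)
    return h
```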

  15. Greedy Best-first Search • Trust your heuristic • evaluate node that minimizes h(n) • f(n) = h(n) • Example: getting from A to B • Explore nodes with the shortest straight-line distance to B • Shortcomings of heuristic? • slow route (traffic), long route, dead end

  16. A* (A-star) Search • Combine two costs • f(n) = g(n) + h(n) • g(n) = cost to get to n • h(n) = cost to get to goal from n • Minimize f(n)
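
An A* sketch combining the two costs exactly as on the slide (again assuming `successors` yields `(state, step_cost)` pairs and `h` is a heuristic such as the straight-line one above):

```python
import heapq
import itertools

def a_star_search(start, is_goal, successors, h):
    """A*: expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    counter = itertools.count()                  # tie-breaker for the heap
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}                          # cheapest g found per state
    while frontier:
        f, _, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return g, path
        for child, step in successors(node):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h(child), next(counter), new_g,
                                child, path + [child]))
    return None
```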

  17. A* is Optimal? • A* can be optimal if h(n) satisfies conditions • h(n) never overestimates cost to reach the goal • admissible heuristic • h(n) is optimistic • f(n) never overestimates cost of a solution through n • Proof of optimality?

  18. A* is Optimal • We must prove that A* will not return a suboptimal goal or a suboptimal path to a goal • Let G be a suboptimal goal node • f(G) = g(G) + h(G) • h(G) = 0 because G is a goal node • f(G) = g(G) > C* (because G is suboptimal)

  19. A* is Optimal • We must prove that A* will not return a suboptimal goal or a suboptimal path to a goal • Let G be a suboptimal goal node • f(G) = g(G) + h(G) • h(G) = 0 because G is a goal node • f(G) = g(G) > C* (because G is suboptimal) • Let n be a node on the optimal path • because h(n) does not overestimate • f(n) = g(n) + h(n) <= C* • Therefore f(n) <= C* < f(G) • node n will be selected before node G
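
Written as one chain (my notation: h*(n) is the true cheapest cost from n to a goal, so admissibility means h(n) ≤ h*(n), and for a node n on the optimal path g(n) + h*(n) = C*):

f(n) = g(n) + h(n) ≤ g(n) + h*(n) = C* < f(G)

so A* always expands such an n before it would ever return G.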
