
CS 236501 Introduction to AI




  1. CS 236501 Introduction to AI, Tutorial 2: Heuristic Search

  2. Blind vs. Heuristic Search
  • Blind Search
    • Does not use any domain-specific knowledge.
    • Requires: an initial state, operators, and a goal predicate.
  • Informed (Heuristic) Search
    • Requires, in addition, a function for evaluating states. This function, also called a heuristic function, estimates the cost of the optimal path from a state to a goal state.
    • Can improve the search in two ways:
      • Leading the algorithm towards a goal state
      • Pruning off branches that do not lie on any (optimal) solution path
  Intro. to AI – Tutorial 1 – By Saher Esmeir & Nela Gurevich

  3. Problem Definition
  • Additional knowledge is added to the problem definition.
  • Problem:
    • Initial state (InitState)
    • Successor function (Succ)
    • Goal predicate (Goal-p)
    • Heuristic function (h: states -> scores)
  • The heuristic function estimates the distance from a state to the goal state.
  • A lower heuristic value means we are closer to a solution.

  4. Example – Binary Tree
  • The problem is to find the number 12 in a binary tree.
  • A state is represented by a number.
  • Initial state: 1
  • Successor function: Succ(x) = {2*x, 2*x + 1}
  • Goal predicate: Goal-p(x) == true iff x == 12
  • Possible heuristic function: h(x) = |x – 12|

  5. General Search Routine

  SearchFunction(States, Problem, Combiner)
  {
      if States == {}
          return ‘Search Failed’
      CurrState <- get and remove the first state in States
      if Problem.Goal-p(CurrState) == true
          return CurrState  // or any other solution form
      else
      {
          successors <- Problem.Succ(CurrState)
          States <- Combiner(successors, States)
          return SearchFunction(States, Problem, Combiner)
      }
  }
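A minimal executable sketch of this routine in Python, written as a loop rather than a recursion to avoid deep call stacks; the `Problem` record and the names `search` and `bfs_combiner` are this example's own scaffolding, not part of the slides.

```python
# Generic search: the combiner decides how the frontier is ordered.
from collections import namedtuple

Problem = namedtuple("Problem", ["init_state", "succ", "goal_p"])

def search(states, problem, combiner):
    while states:
        curr = states.pop(0)              # get and remove the first state
        if problem.goal_p(curr):
            return curr                   # or any other solution form
        states = combiner(problem.succ(curr), states)
    return None                           # 'Search Failed'

# The binary-tree example from slide 4, searched breadth-first by
# appending successors after the existing frontier.
tree = Problem(init_state=1,
               succ=lambda x: [2 * x, 2 * x + 1],
               goal_p=lambda x: x == 12)
bfs_combiner = lambda successors, states: states + successors
```

With this FIFO combiner, `search([tree.init_state], tree, bfs_combiner)` behaves as blind breadth-first search and returns 12.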

  6. Priority Combiner

  Priority-Combiner(states1, states2)
  {
      Result <- {}
      for s in states1
          Result.Insert(s, h(s))
      for s in states2
          Result.Insert(s, h(s))
      return Result
  }

  • Priority-Combiner is aware of the heuristic function that characterizes the problem.
  • Insert() inserts a state into a list kept sorted by the value associated with the state.

  7. Best-First Search

  Priority-Combiner <- Problem.h
  States <- Problem.InitState
  Solution = SearchFunction(States, Problem, Priority-Combiner)

  States <- (1)
  States <- (3 2)
  States <- (7 6 2)
  States <- (14 15 6 2)
  States <- (15 6 2 28 29)
  States <- (6 2 28 29 30 31)
  States <- (12 13 2 28 29 30 31)
  12 is found!
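A hypothetical Python rendering of Best-First Search for the binary-tree example (goal 12, h(x) = |x – 12|), using a heap as the sorted frontier; it reproduces the trace above, expanding 1, 3, 7, 14, 15, 6 before reaching 12.

```python
import heapq

def best_first(init, succ, goal_p, h):
    """Best-First Search: always expand the state with the lowest h."""
    frontier = [(h(init), init)]
    while frontier:
        _, state = heapq.heappop(frontier)    # cheapest-h state first
        if goal_p(state):
            return state
        for s in succ(state):
            heapq.heappush(frontier, (h(s), s))
    return None
```

Usage on the tree example: `best_first(1, lambda x: [2*x, 2*x+1], lambda x: x == 12, lambda x: abs(x - 12))` returns 12.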

  8. Beam Search

  Beam-Priority-Combiner(states1, states2)
  {
      Result <- {}
      for s in states1
          Result.Insert(s, h(s))
      for s in states2
          Result.Insert(s, h(s))
      if |Result| > BeamWidth
          Result <- the first BeamWidth states of Result
      return Result
  }

  • BeamWidth is a parameter of Beam-Priority-Combiner.

  9. Beam Search (cont.)

  Beam-Priority-Combiner <- Problem.h
  Beam-Priority-Combiner.BeamWidth <- 2
  States <- Problem.InitState
  Solution = SearchFunction(States, Problem, Beam-Priority-Combiner)

  • Is beam search complete?

  10. Beam Search – Example

  States <- (1)
  States <- (3 2)
  States <- (7 6)
  States <- (14 15)  // 6 is out
  States <- (15 28)
  States <- (28 30)
  ...
  [figure: the explored part of the binary tree, with nodes 1, 2, 3, 6, 7, 14, 15, 28, 30, 56, 57]
  • How can we solve this problem?
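A sketch of beam search with the truncated-merge combiner from slide 8; the `max_expansions` cap is an added safeguard (not in the pseudocode), since with beam width 2 this example prunes state 6 and never reaches 12.

```python
def beam_search(init, succ, goal_p, h, width, max_expansions=1000):
    """Beam search: merge successors into the frontier, keep only the
    `width` best states by h; max_expansions is an added safety limit."""
    states = [init]
    for _ in range(max_expansions):
        if not states:
            return None
        curr = states.pop(0)
        if goal_p(curr):
            return curr
        merged = states + list(succ(curr))
        merged.sort(key=h)                 # Beam-Priority-Combiner
        states = merged[:width]
    return None                            # gave up: beam search is incomplete

succ = lambda x: [2 * x, 2 * x + 1]
h = lambda x: abs(x - 12)
goal = lambda x: x == 12
```

With `width=2` the call returns None after exhausting the cap, mirroring the failing trace above; with `width=3` it finds 12.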

  11. Iterative Widening Beam Search

  Iter-Wide-Search(Problem, MaxWidth)
  {
      Beam-Priority-Combiner <- Problem.h
      for i from 1 to MaxWidth
      {
          Beam-Priority-Combiner.BeamWidth <- i
          States <- Problem.InitState
          Solution = SearchFunction(States, Problem, Beam-Priority-Combiner)
          if Solution != {}
              return Solution
      }
  }

  12. Iterative Widening Beam Search – Example

  Beam width = 2:
  States <- (1)
  States <- (3 2)
  States <- (7 6)
  States <- (14 15)
  ... (an additional limit is necessary to prevent an infinite search)
  -----------------
  Beam width = 3:
  States <- (1)
  States <- (3 2)
  States <- (7 6 2)
  States <- (14 15 6)
  States <- (15 6 28)
  States <- (6 28 30)
  States <- (12 13 28)
  12 is found!
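A self-contained sketch of iterative widening on the same example; the inner loop is the beam search with truncated merge, and the per-width `max_expansions` cap plays the role of the additional limit the example mentions. Returning the winning width alongside the solution is this example's addition.

```python
def iter_wide_search(init, succ, goal_p, h, max_width, max_expansions=1000):
    """Retry beam search with widths 1..max_width until one succeeds."""
    for width in range(1, max_width + 1):
        states = [init]
        for _ in range(max_expansions):
            if not states:
                break
            curr = states.pop(0)
            if goal_p(curr):
                return curr, width          # also report the winning width
            merged = states + list(succ(curr))
            merged.sort(key=h)              # keep the `width` best states
            states = merged[:width]
    return None, None
```

On the tree example it fails at widths 1 and 2 and succeeds at width 3, matching the trace above.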

  13. Local Search
  • Keeps one state in memory.
  • At each step, moves to one of the neighbors of the current state.
  • The heuristic function is used to choose the next step.
  • Used in search spaces with high branching factors, in optimization problems, or when the solution path is not required (just the goal state).

  14. Local Search

  LocalSearch(state, NextStepFunction)
  {
      if Goal-p(state)
          return state
      else
      {
          nextState <- NextStepFunction(state)
          if nextState == NIL
              return ‘Failure’
          else
              return LocalSearch(nextState, NextStepFunction)
      }
  }

  15. Steepest-Ascent Hill-Climbing

  BestSuccessor(state)
  {
      Candidates <- {}
      bestH <- infinity
      succ <- Problem.Succ(state)
      for s in succ
      {
          if h(s) < bestH
              bestH <- h(s), Candidates <- {s}
          else if h(s) == bestH
              add s to Candidates
      }
      if bestH < h(state)
          return a random member of Candidates
      else  // no better successor
          return NIL
  }

  16. Steepest-Ascent Hill-Climbing

  state <- Problem.InitState
  Solution = LocalSearch(state, BestSuccessor)

  * Failure indicates a local minimum.
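A runnable sketch of LocalSearch with the steepest-ascent BestSuccessor step, tried on the binary-tree example; the function names mirror the pseudocode, while the iterative loop and the lambdas are this example's own scaffolding.

```python
import random

def best_successor(state, succ, h):
    """Steepest-ascent step: the best strictly improving child, or None."""
    children = list(succ(state))
    if not children:
        return None
    best_h = min(h(s) for s in children)
    if best_h >= h(state):
        return None                        # no strictly better successor
    candidates = [s for s in children if h(s) == best_h]
    return random.choice(candidates)       # random tie-break, as in the slide

def local_search(state, next_step, goal_p):
    while not goal_p(state):
        state = next_step(state)
        if state is None:
            return None                    # 'Failure': a local minimum
    return state

h = lambda x: abs(x - 12)
step = lambda s: best_successor(s, lambda x: [2 * x, 2 * x + 1], h)
```

On this example, `local_search(1, step, lambda x: x == 12)` climbs 1 -> 3 -> 7 -> 14 and then fails: both children of 14 have h = 16 and 17, worse than h(14) = 2, so 14 is a local minimum, exactly the failure mode the slide warns about.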

  17. Steepest-Ascent Hill-Climbing with Sideways Steps

  BestOrEqual_Successor(state)
  {
      Same as BestSuccessor, but may also return a successor whose
      heuristic value is equal to h(state)
  }

  state <- Problem.InitState
  Solution = LocalSearch(state, BestOrEqual_Successor)

  18. Hill-Climbing: More Variations
  • Steepest-Ascent Hill-Climbing with local-minimum avoidance
    • When in a local minimum, make k random steps, then continue hill-climbing.
  • Stochastic Hill-Climbing
    • Among the better successors, choose one at random with probability proportional to the improvement.
  • First-Choice Hill-Climbing
    • Choose the first successor that improves the heuristic value.
  • Restart Hill-Climbing
    • When in a local minimum, restart the search from a random node.
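As one concrete illustration, a First-Choice step can be sketched like this; sampling by shuffling the whole successor list is this example's simplification (with huge branching factors one would generate successors lazily instead).

```python
import random

def first_choice_step(state, succ, h):
    """First-Choice Hill-Climbing step: examine successors in random
    order and take the first one that strictly improves h."""
    children = list(succ(state))
    random.shuffle(children)
    for s in children:
        if h(s) < h(state):
            return s                       # first improving successor wins
    return None                            # stuck: no improving successor
```

The point of the variant is that it can move without evaluating h on every successor, which matters when the branching factor is large.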

  19. Heuristic Function
  • An accurate heuristic function helps guide the heuristic search quickly to the solution.
  • In rare cases we can define a heuristic function that is exact.
  • In most cases we try to define an accurate estimate.
  • Considerations when choosing a heuristic function:
    • Quality
    • Computational efficiency

  20. Example: LightzOut
  • http://www.gamesgnome.com/logicpuzzles/lights/
  • http://games.hughesclan.com/lights/
  • http://javaboutique.internet.com/LightzOut/
  • 5x5 board
  • Each cell can be “On” or “Off”.
  • Clicking a cell inverts the states of its 4 neighbors.
  • The goal is to turn off all the lights.
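A sketch of the move operator under the rule stated above (clicking a cell inverts its 4 neighbors); representing a board as a frozenset of the (row, col) coordinates that are “On” is this example's choice, not part of the slides.

```python
N = 5  # board side length

def click(board, r, c):
    """Return a new board after clicking cell (r, c): each in-bounds
    orthogonal neighbor has its state inverted."""
    toggled = set(board)
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < N and 0 <= nc < N:    # ignore neighbors off the board
            toggled ^= {(nr, nc)}          # invert this neighbor
    return frozenset(toggled)

def successors(board):
    # any of the 25 cells can be clicked, so the branching factor is 25
    return [click(board, r, c) for r in range(N) for c in range(N)]
```

Clicking the central cell of an all-off board turns on its four neighbors, and the goal test is simply `board == frozenset()`. Note that every click is its own inverse, so clicking the same cell twice restores the board.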

  21. Example: LightzOut
  [figures: a board before and after clicking the central cell, and the goal board with all lights off]

  22. LightzOut – Solution Example
  [figure: a sequence of boards illustrating a solution]

  23. LightzOut: Heuristic Function 1
  • H(s) = the number of cells that are “On”
  • Motivation: the fewer lights are “On”, the closer we are to the solution.
  • Advantages:
    • Easy to implement
    • Fast (when using some optimization tricks)
  • Disadvantages:
    • Misleading: sometimes we need to pass through a “worse” state than the current one in order to solve the puzzle.
    • Does not estimate the correct distance to the goal.
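In code this heuristic is a one-liner; the board is sketched here as a set of the (row, col) coordinates of the “On” cells, a representation chosen for this example.

```python
def h1(on_cells):
    """Heuristic 1: count the lights that are still 'On'."""
    return len(on_cells)
```

The goal board scores 0, and since a set knows its own size, evaluating h1 is constant time, which is what makes it so cheap.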

  24. LightzOut: Heuristic Function 1
  [figure: three example boards with h = 3, h = 5, and h = 5]
  • Disadvantage #1: here, the first step towards the solution goes through a worse state.
  • Disadvantage #2: a high heuristic value, although the board is one step from the solution.

  25. LightzOut: Heuristic Function 2
  • H(s) = the sum of the Manhattan distances of each “On” cell from its nearest neighbor that is “On”
  • H(board with one cell on) = some arbitrary number (e.g., 10)
  • Motivation: adjacent “On” cells are easier to solve.
  [figure: two example boards with h = 5 and h = 8]

  26. LightzOut: Heuristic Function 2
  • Advantages:
    • Favors boards with adjacent “On” cells, which are indeed easier to solve.
  • Disadvantages:
    • Time complexity is O(N^4), which is very heavy.
    • Does not distinguish between easy-to-solve (cross) and hard-to-solve (almost-cross) adjacent groups.
    • Still not exact.
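A sketch of Heuristic 2 under the same set-of-(row, col) board representation; the value 10 for a single lone cell is the arbitrary constant from the slide, and the nested loop over all pairs of “On” cells is exactly the heavy cost noted above.

```python
def h2(on_cells):
    """Heuristic 2: for each 'On' cell, the Manhattan distance to its
    nearest other 'On' cell, summed over all 'On' cells."""
    cells = list(on_cells)
    if len(cells) <= 1:
        return 10 * len(cells)             # 0 for the goal, 10 for a lone cell
    total = 0
    for i, (r1, c1) in enumerate(cells):
        total += min(abs(r1 - r2) + abs(c1 - c2)
                     for j, (r2, c2) in enumerate(cells) if j != i)
    return total
```

Two adjacent “On” cells score 2 (each is distance 1 from the other), while the same two cells at opposite corners of a 5x5 board score 16, so adjacent groups are indeed preferred.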

  27. Example: LightzOut
  • Which search algorithm should we use?
  • Note: the suggested heuristics are not exact and are sometimes misleading.
  • Best-First Search has high memory requirements.
  • Beam Search is a possibility:
    • With a low beam width it may fail to find a solution.
    • A higher beam width improves the chances of solving the puzzle.

  28. Example: LightzOut
  • Steepest-Ascent Hill-Climbing is very likely to get stuck in a local minimum.
    • Example: Heuristic #1 on the first board in slide 22.
  • Hill-Climbing with sideways steps, Stochastic Hill-Climbing, and First-Choice Hill-Climbing may have the same problem.

  29. Example: LightzOut
  • Hill-Climbing with local-minimum avoidance is a good option.
    • Local minima will not terminate the search, and the random steps might lead to a state close to the solution.
    • It will find long solutions (a drawback if we are interested in the solution path).
  • Restart Hill-Climbing is not possible:
    • There is no way to generate a random state.

  30. Summary
  • Heuristic searches are guided by knowledge of the problem domain.
  • A good heuristic function is important for an efficient heuristic search.
  • It is hard to choose the “best” heuristic search for a given problem. Sometimes experiments help us understand an algorithm’s behavior on a certain problem.
