
Introduction to Artificial Intelligence CS 438 Spring 2008






Presentation Transcript


  1. Introduction to Artificial Intelligence CS 438 Spring 2008 • Today • AIMA, Ch. 4 • Heuristics & Iterative Improvement Algorithms • Next Week • AIMA, Ch. 5 • Constraint Satisfaction Problems • Man vs. Machine: The Cubinator

  2. Heuristic Function • Recognizes desirable patterns in a problem state and translates them into a numeric score. • What are some desirable patterns for Peg Solitaire?

  3. Example: 8-tile Puzzle • The average solution cost for a randomly generated puzzle is 22 steps • With an average branching factor of 3, an exhaustive search to that depth visits approximately 3^22 ≈ 3.1 × 10^10 states • With duplicate-path pruning (DPP) the number of distinct reachable states is closer to 170,000 • For a 15-tile puzzle the number is about 10^13
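As a quick check on these figures (a short sketch; the "closer to 170,000" on the slide corresponds to the exact count 9!/2 = 181,440 of distinct reachable states):

```python
import math

# Exhaustive tree search to the average solution depth of 22
# with branching factor ~3:
print(3 ** 22)                   # 31381059609, i.e. ~3.1 * 10^10

# With duplicate-path pruning, the search is bounded by the number of
# distinct reachable states, 9!/2 (only half of all tile permutations
# are reachable):
print(math.factorial(9) // 2)    # 181440, the "closer to 170,000" figure

# 15-tile puzzle: 16!/2 distinct reachable states
print(math.factorial(16) // 2)   # 10461394944000, i.e. ~10^13
```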

  4. Admissible heuristics The 8-tile puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location) • h1(S) = ? • h2(S) = ?

  5. Admissible heuristics The 8-tile puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location) • h1(S) = 8 • h2(S) = 3+1+2+2+2+3+3+2 = 18
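A minimal sketch of both heuristics, assuming states are 9-tuples read left to right, top to bottom, with 0 for the blank (the goal layout and the state S below are the standard AIMA example, chosen here because they reproduce the slide's values; they are assumptions, not course code):

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # assumed goal layout

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

# Assumed start state S (AIMA's standard 8-puzzle example):
S = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(S), h2(S))   # 8 18
```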

  6. Effect of the heuristic on performance • A good heuristic will expand fewer nodes • The closer the heuristic's estimate is to the actual cost, the better it focuses the search • Dominance • If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 • h1 will expand as many or more nodes than h2

  7. Example: 8-tile Puzzle

  8. Admissible Heuristics for Pathing • Straight-line distance: h(A) = sqrt((A.x - goal.x)^2 + (A.y - goal.y)^2) • Manhattan distance: h(A) = abs(A.x - goal.x) + abs(A.y - goal.y) • Diagonal distance: h(A) = max(abs(A.x - goal.x), abs(A.y - goal.y)) • Use a weighting factor to estimate the cost of traversing difficult terrain • A* Demo
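The same three estimates as plain functions (a sketch; `a` and `goal` are assumed to be points with `x`/`y` grid coordinates):

```python
import math

def straight_line(a, goal):
    """Euclidean distance; admissible when movement at any angle is allowed."""
    return math.hypot(a.x - goal.x, a.y - goal.y)

def manhattan(a, goal):
    """Admissible on a 4-connected grid (no diagonal moves)."""
    return abs(a.x - goal.x) + abs(a.y - goal.y)

def diagonal(a, goal):
    """Chebyshev distance; admissible on an 8-connected grid with unit-cost diagonals."""
    return max(abs(a.x - goal.x), abs(a.y - goal.y))
```

A terrain weight keeps the heuristic admissible only if it never exceeds the true minimum cost of crossing a square.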

  9. Inventing heuristics • Look at a relaxed version of the problem • A problem with fewer restrictions on the actions is called a relaxed problem • The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem • If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the exact solution cost • If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the exact solution cost • Look at a lot of solutions to find desirable patterns • Weighted linear functions (Ch. 6)

  10. Local search algorithms • Also known as iterative improvement algorithms • Start with a complete state and make modifications to improve its quality • For problems in which the final state IS the solution • The path to get there is irrelevant • In such cases, we can use local search algorithms • Keep a single "current" state and try to improve it

  11. Example: n-queens • Put n queens on an n × n board with no two queens on the same row, column, or diagonal

  12. Hill-climbing search • "Like climbing Everest in thick fog with amnesia" • For optimization problems, use an objective function (looking for the highest-value state) • For minimization (cost) problems, use a heuristic function

  13. Hill-climbing search • Problem: depending on initial state, can get stuck in local maxima
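A minimal steepest-ascent sketch on n-queens (an assumed illustration, not course code: a state lists each row's queen column, and climbing on the negated conflict count plays the role of the objective function):

```python
import random

def conflicts(cols):
    """Attacking pairs: same column or same diagonal (one queen per row)."""
    return sum(1 for i in range(len(cols)) for j in range(i + 1, len(cols))
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def hill_climb(n=8):
    cols = [random.randrange(n) for _ in range(n)]
    while True:
        best, best_move = conflicts(cols), None
        if best == 0:
            return cols                       # solution found
        for r in range(n):                    # try moving each queen within its row
            for c in range(n):
                if c == cols[r]:
                    continue
                old, cols[r] = cols[r], c
                v = conflicts(cols)
                cols[r] = old
                if v < best:
                    best, best_move = v, (r, c)
        if best_move is None:
            return None                       # local maximum: no neighbor improves
        cols[best_move[0]] = best_move[1]
```

Random restarts (rerunning `hill_climb` until it returns a board) are the usual remedy for the local-maximum failure described above.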

  14. Example: Tower of Babble • A block stacking agent is asked to place blocks in order with block "A" on the bottom. • obj(n) = add one for each block that is resting on the thing it is supposed to be on, and subtract one for each block that is sitting on the wrong thing.

  15. Example: Tower of Babble

  16. Example: Tower of Babble • New objective function • obj(n) = for each block with the correct support structure, add one for every block in the structure, and for each block with the incorrect support structure subtract one for every block in the structure.

  17. Example: Tower of Babble

  18. Example: Tower of Babble • The objective function captures the idea that incorrect structures are bad and should be disassembled, and correct structures are good and should be built up
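A sketch of both objective functions, assuming a state maps each block to what it rests on ('table' or another block) and a four-block goal tower with A on the bottom (the instance and helper names are illustrative assumptions):

```python
GOAL = {'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C'}  # assumed goal tower

def obj1(on):
    """Slide 14: +1 per block on the correct thing, -1 otherwise."""
    return sum(1 if on[b] == GOAL[b] else -1 for b in on)

def obj2(on):
    """Slide 16: +/- the size of each block's whole support structure."""
    def correct(b):          # everything beneath b, down to the table, is right
        while b != 'table':
            if on[b] != GOAL[b]:
                return False
            b = on[b]
        return True
    def height(b):           # blocks in b's structure, itself included
        h = 0
        while b != 'table':
            h, b = h + 1, on[b]
        return h
    return sum(height(b) if correct(b) else -height(b) for b in on)

# The finished tower scores 1 + 2 + 3 + 4 = 10 under obj2:
print(obj2({'A': 'table', 'B': 'A', 'C': 'B', 'D': 'C'}))   # 10
```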

  19. Local beam search • Keep track of k states rather than just one • Start with k randomly generated states • At each iteration, all the successors of all k states are generated • If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
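A generic sketch of that loop (the helper names `random_state`, `successors`, `is_goal`, and `score` are assumed, problem-specific callables):

```python
def local_beam_search(k, random_state, successors, is_goal, score, max_iters=1000):
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [t for s in states for t in successors(s)]   # all successors of all k
        for t in pool:
            if is_goal(t):
                return t
        if not pool:
            return None
        states = sorted(pool, key=score, reverse=True)[:k]  # keep the k best overall
    return None
```

Because the k survivors are picked from the combined pool, information passes between the parallel searches, unlike k independent hill climbs.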

  20. Genetic algorithms • A successor state is generated by combining two parent states • Start with k randomly generated states (population) • A state is represented as a string over a finite alphabet (often a string of 0s and 1s) • Evaluation function (fitness function). Higher values for better states. • Produce the next generation of states by selection, crossover, and mutation
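Those steps as a compact sketch over bit strings (assumed design choices, not the course's code: fitness-proportional selection, single-point crossover, per-bit mutation; `fitness` must return non-negative values for this selection scheme):

```python
import random

def genetic_algorithm(fitness, length, k=20, p_mut=0.05, generations=200):
    # Start with k randomly generated states (the population)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(k)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]
        next_pop = []
        for _ in range(k):
            # Selection: parents drawn with probability proportional to fitness
            mom, dad = random.choices(pop, weights=weights, k=2)
            # Crossover: splice the parents at a random cut point
            cut = random.randrange(1, length)
            child = mom[:cut] + dad[cut:]
            # Mutation: flip each bit with small probability
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```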

  21. Genetic algorithms: 8-queens

  22. Genetic algorithms: 8-queens • Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28) • 24/(24+23+20+11) = 31% • 23/(24+23+20+11) = 29%, etc.
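A sketch of this fitness function and the quoted percentages, assuming the AIMA encoding where a state gives each queen's row within its column:

```python
def fitness(rows):
    """Non-attacking pairs, out of 8 * 7 / 2 = 28 for 8 queens."""
    n = len(rows)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i)
    return n * (n - 1) // 2 - attacks

# Selection probabilities for a population with fitnesses 24, 23, 20, 11:
fits = [24, 23, 20, 11]
for f in fits:
    print(f"{f}: {f / sum(fits):.0%}")   # 31%, 29%, 26%, 14%
```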
