
Artificial Intelligence



  1. Artificial Intelligence Search Problem (2)

  2. Heuristic Search Heuristics mean choosing the branches in a state space (when no exact solution is available, as in medical diagnosis, or when the computational cost is very high, as in chess) that are most likely to lead to an acceptable problem solution.

  3. Informed search • So far, we have assumed that no non-goal state looks better than another • This is unrealistic • Even without knowing the road structure, some locations seem closer to the goal than others • Some states of the 8-puzzle seem closer to the goal than others • It makes sense to expand closer-seeming nodes first

  4. Heuristic Merriam-Webster's Online Dictionary Heuristic (pron. \hyu-’ris-tik\): adj. [from Greek heuriskein to discover.] involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods The Free On-line Dictionary of Computing (15Feb98) heuristic 1. <programming> A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. <algorithm> approximation algorithm.

  5. A heuristic function • Let h(n) be an evaluation function (the heuristic) • h(n) = estimated cost of the cheapest path from node n to a goal node • If n is a goal node, then h(n) = 0

  6. Examples (1): route finding • Imagine the problem of finding a route on a road map, where the NET below is the road map • Define f(T) = the straight-line distance from T to G • The estimate can be wrong! [Figure: a road map with start S, goal G and nodes A–F; arc costs between 2 and 5, and straight-line estimates such as 10.4, 8.9 and 6.9]

  7.–13. [Figures: seven steps of the route search on a map with nodes A–E and arc costs 50, 60, 75, 80, 100, 125 and 150; the accumulated path cost at each step is 50, 125, 200, 300 and 450, with a final alternative route of cost 380]

  14. Heuristic Functions • Estimate of path cost h • From a state to the nearest solution • h(state) >= 0 • h(solution) = 0 • Example: straight-line distance • 'As the crow flies' in route finding • Where does h come from? • Mathematics, introspection, inspection or programs (e.g. ABSOLVER) [Figure: UK map with London, Liverpool, Leeds, Nottingham and Peterborough, and crow-flies distances 75, 120, 135 and 155]

  15. Example: Romania

  16. Examples (2): 8-puzzle • f1(T) = the number of correctly placed tiles on the board • f2(T) = the number of incorrectly placed tiles on the board: gives a (rough!) estimate of how far we are from the goal • Most often, 'distance to goal' heuristics are more useful! [Figure: an 8-puzzle board with tiles 1–8 for which f1 = 4 and f2 = 4]

  17. Examples (3): Manhattan distance • f3(T) = the sum of (the horizontal + vertical distance that each tile is away from its final destination) • Gives a better estimate of the distance from the goal node • In the example board, f3 = 1 + 1 + 2 + 2 = 6 [Figure: an 8-puzzle board with tiles 1–8]
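As a concrete sketch of the two tile heuristics, the following assumes the board is a flat tuple of nine entries with 0 for the blank; the representation, function names and goal layout are illustrative assumptions, not taken from the slides.

```python
# Sketch of the 8-puzzle heuristics f2 (misplaced tiles) and f3 (Manhattan
# distance). The board layout and names are assumed for illustration.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank

def misplaced_tiles(state, goal=GOAL):
    """f2: count the tiles (not the blank) that are out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """f3: sum of the horizontal + vertical distances of each tile
    from its goal cell on the 3x3 grid."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```

Note that f3 dominates f2 (each misplaced tile contributes at least 1 to the Manhattan sum), which is one way to see why the Manhattan distance usually guides the search better.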

  18. Examples (4): chess • f(T) = (value count of black pieces) − (value count of white pieces), i.e. the sum of v(piece) over the black pieces minus the sum of v(piece) over the white pieces [Figure: piece icons appear inside the v(·) terms]

  19. Heuristic Evaluation Function • It evaluates the performance of the different heuristics for solving the problem: f(n) = g(n) + h(n) • where f is the evaluation function • g(n) is the actual length of the path from the start state to state n • h(n) estimates the distance from state n to the goal

  20. Search Methods • Best-first search • Greedy best-first search • A* search • Hill-climbing search • Genetic algorithms

  21. Best-First Search • The evaluation function f gives a cost for each state • Choose the state with the smallest f(state) ('the best') • Agenda: f decides where new states are put • Graph: f decides which node to expand next • Many different strategies depending on f • For uniform-cost search, f = path cost • Greedy best-first search • A* search
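The family of strategies above can be captured by one frontier loop whose ordering function f is swapped out. This is a minimal sketch with assumed names (`successors` yields (state, step-cost) pairs; `f` ranks frontier entries), not the only way to organise it.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the frontier entry with
    the smallest f. `f(state, g)` may use the path cost g, the state, or both."""
    frontier = [(f(start, 0), 0, start, [start])]
    best_g = {}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in best_g and best_g[state] <= g:
            continue  # already expanded via a path at least as cheap
        best_g[state] = g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(frontier, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None, float("inf")
```

Passing `f = lambda state, g: g` gives uniform-cost search; `f = lambda state, g: h(state)` gives greedy best-first; `f = lambda state, g: g + h(state)` gives A*.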

  22. Greedy best-first search • Evaluation function f(n) = h(n) (heuristic) = estimate of the cost from n to the goal • It ignores the path cost • Greedy best-first search expands the node that appears to be closest to the goal

  23. Greedy search • Use f(n) = h(n) as the evaluation function • Select for expansion the node believed to be closest to a goal node (hence 'greedy'), i.e. the node with the smallest f value, as in the example • Assuming all arc costs are 1, greedy search finds goal g, which has a solution cost of 5 • However, the optimal solution is the path to goal i, with cost 3 [Figure: a search graph with nodes a–e, g, h and i, labelled with h values between 0 and 4]
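The trap in the slide's example can be reproduced in a small sketch. The graph and h-values below are assumptions chosen so that a misleading heuristic lures greedy search to the more expensive goal.

```python
import heapq

def greedy_best_first(start, goal_test, successors, h):
    """Greedy best-first: rank the frontier by h(n) alone, ignoring path cost."""
    frontier = [(h[start], start, [start], 0)]
    visited = set()
    while frontier:
        _, state, path, g = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in visited:
            continue
        visited.add(state)
        for nxt, cost in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt], g + cost))
    return None, float("inf")

# Assumed example: all arc costs are 1; h undervalues the long branch, so
# greedy reaches goal G2 at cost 4 even though S-A-G1 costs only 2.
graph = {"S": [("A", 1), ("B", 1)], "A": [("G1", 1)],
         "B": [("C", 1)], "C": [("D", 1)], "D": [("G2", 1)]}
h = {"S": 5, "A": 4, "B": 1, "C": 1, "D": 1, "G1": 0, "G2": 0}
```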

  24.–27. Greedy best-first search example [Figures: four steps of a greedy best-first expansion]

  28. Hill Climbing & Gradient Descent • For artefact-only problems (don’t care about the path) • Depends on some e(state) • Hill climbing tries to maximise score e • Gradient descent tries to minimise cost e (the same strategy!) • Randomly choose a state • Only choose actions which improve e • If cannot improve e, then perform a random restart • Choose another random state to restart the search from • Only ever have to store one state (the present one) • Can’t have cycles as e always improves

  29. Hill Climbing - Algorithm 1. Pick a random point in the search space 2. Consider all the neighbors of the current state 3. Choose the neighbor with the best quality and move to that state 4. Repeat steps 2–3 until all the neighboring states are of lower quality 5. Return the current state as the solution state
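The numbered steps above can be sketched as a short loop; `neighbors` and `quality` are assumed callables, and ties are broken arbitrarily by `max`.

```python
def hill_climb(initial, neighbors, quality):
    """Steepest-ascent hill climbing: repeatedly move to the best-quality
    neighbor until no neighbor beats the current state, then return it."""
    current = initial
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=quality)
        if quality(best) <= quality(current):
            return current  # all neighbors are of lower (or equal) quality
        current = best
```

For example, maximising quality(x) = −(x − 3)² over the integers, starting from 0 with neighbors x − 1 and x + 1, climbs to x = 3.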

  30. Example: 8 Queens • Place 8 queens on the board • So that no queen can "take" another • Gradient descent search • Throw the queens on randomly • e = the number of pairs of queens that can attack each other • Move a queen out of the others' way • This decreases the evaluation function • If this can't be done • Throw the queens on randomly again
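A sketch of that gradient-descent-with-restarts idea. The one-queen-per-row representation and the restart policy are assumptions; the slide does not fix either.

```python
import random

def attacking_pairs(cols):
    """e = number of queen pairs that attack each other; cols[r] is the column
    of the queen in row r, so queens can only clash on columns or diagonals."""
    n = len(cols)
    return sum(1 for r1 in range(n) for r2 in range(r1 + 1, n)
               if cols[r1] == cols[r2] or abs(cols[r1] - cols[r2]) == r2 - r1)

def eight_queens(n=8, seed=0):
    """Throw queens on randomly, move single queens to decrease e, and
    restart from a fresh random placement whenever no move decreases e."""
    rng = random.Random(seed)
    while True:
        cols = [rng.randrange(n) for _ in range(n)]  # random placement
        improved = True
        while improved:
            improved = False
            e = attacking_pairs(cols)
            if e == 0:
                return cols  # no pair can attack: solved
            for row in range(n):
                best_col = cols[row]
                for c in range(n):
                    cols[row] = c
                    if attacking_pairs(cols) < e:
                        e = attacking_pairs(cols)
                        best_col = c
                        improved = True
                cols[row] = best_col
        # stuck at a local minimum with e > 0: restart with a new random board
```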

  31. Hill-climbing search • Looks one step ahead to determine if any successor is better than the current state; if there is one, move to the best successor • Rule: if there exists a successor s of the current state n such that h(s) < h(n) and h(s) ≤ h(t) for all successors t of n, then move from n to s; otherwise, halt at n

  32. Hill-climbing search • Similar to Greedy search in that it uses h(), but does not allow backtracking or jumping to an alternative path since it doesn’t “remember” where it has been.

  33. Hill climbing example • f(n) = −(number of tiles out of place) [Figure: a sequence of 8-puzzle boards from the start state (f = −4) to the goal (f = 0), with neighboring states scored between −5 and −1 at each step]

  34. [Figure: uniform-cost search on a graph with start S, goal G and nodes A, B, C; nodes are labelled with g values (from g = 0 at S up to g = 15) and their expansion order] • The goal state is achieved; in relation to path cost, UCS has found the optimal route with 4 expansions • Note that G is split into two nodes corresponding to the two paths to it, which converts the graph into a rooted tree

  35. A* Search Algorithm • Evaluation function f(n) = h(n) + g(n) • h(n) = estimated cost to the goal from n • g(n) = cost so far to reach n • A* uses admissible heuristics, i.e. h(n) ≤ h*(n), where h*(n) is the true cost from n • A* search finds the optimal path

  36. A* search • Best-known form of best-first search • Idea: avoid expanding paths that are already expensive • Combines uniform-cost and greedy search • Evaluation function f(n) = g(n) + h(n) • g(n): the cost (so far) to reach the node • h(n): estimated cost to get from the node to the goal • f(n): estimated total cost of the path through n to the goal • Implementation: expand the node n with minimum f(n)
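Putting the pieces together, a compact A* sketch. The names and the `best_g` bookkeeping are assumptions; the decrease-key step of a textbook open list is replaced by lazily skipping stale heap entries.

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A*: expand the open node with minimum f(n) = g(n) + h(n). With an
    admissible h (h(n) <= true cost), the first goal popped is optimal."""
    open_list = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue  # stale entry superseded by a cheaper path
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```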

  37. Example search space [Figure: a search tree with start state S (h = 8, g = 0) and goal state G (h = 0); internal nodes A–E carry h values between 3 and 8, g values, arc costs between 1 and 9, and parent pointers]

  38.–43. A* search example [Figures: six steps of an A* expansion]

  44. A* Example: Route Finding • First states to try: Birmingham, Peterborough • f(n) = distance from London + crow-flies distance from the state • i.e. solid + dotted line distances • f(Peterborough) = 120 + 155 = 275 • f(Birmingham) = 130 + 150 = 280 • Hence expand Peterborough • But we must go through Leeds from Nottingham • So later Birmingham turns out to be better [Figure: UK map with London, Birmingham, Nottingham, Leeds, Liverpool and Peterborough; road distances 120, 130, 135 and 150 and crow-flies distances 150 and 155]

  45. Properties of A* • Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)) • Time? Exponential • Space? Keeps all nodes in memory • Optimal? Yes

  46. City Map [Figure: a weighted city map with start B and goal O; nodes A–Q are labelled with straight-line distances to the goal (e.g. B 2714, N 2427, P 1841, J 1666, O 0) and road costs between 50 and 1430]

  47. [Figure: the first A* step on the city map: the Open list holds B with f = 2714 + 0 = 2714; the Closed list is empty]
