
Artificial Intelligence

Artificial Intelligence. Search Problem (2). Oracle path. Uninformed and informed searches. Since we know what the goal state is like, is it possible to get there faster? Breadth-first search. Heuristic search.



  1. Artificial Intelligence Search Problem (2)

  2. Oracle path • Uninformed and informed searches • Since we know what the goal state is like, is it possible to get there faster? • Breadth-first search

  3. Heuristic Search Heuristics means choosing the branches in a state space that are most likely to lead to an acceptable problem solution, when no exact solution is available (as in medical diagnosis) or the computational cost is too high (as in chess).

  4. Informed search • So far, we have assumed that no non-goal state looks better than another • Unrealistic • Even without knowing the road structure, some locations seem closer to the goal than others • Some states of the 8-puzzle seem closer to the goal than others • Makes sense to expand closer-seeming nodes first

  5. Heuristic Merriam-Webster's Online Dictionary Heuristic (pron. \hyu-’ris-tik\): adj. [from Greek heuriskein to discover.] involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods The Free On-line Dictionary of Computing (15Feb98) heuristic 1. <programming> A rule of thumb, simplification or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. Unlike algorithms, heuristics do not guarantee feasible solutions and are often used with no theoretical guarantee. 2. <algorithm> approximation algorithm.

  6. A heuristic function • Let h(n) be an evaluation function (the heuristic) • h(n) = estimated cost of the cheapest path from node n to a goal node • If n is a goal node, then h(n) = 0

  7. Examples (1): • Imagine the problem of finding a route on a road map, and that the net below is the road map • Define f(T) = the straight-line distance from T to G • The estimate can be wrong! [Figure: a road map with nodes S, A, B, C, D, E, F and goal G, edge costs, and straight-line estimates to G such as 10.4, 8.9, 6.9, and 6.7.]

  8. A Quick Review • g(n) = cost from the initial state to the current state n • h(n) = estimated cost of the cheapest path from node n to a goal node • f(n) = evaluation function to select a node for expansion (usually the lowest cost node)
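
As a minimal sketch of how these three quantities fit together (the Node class and field names below are illustrative, not from the slides):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """A search node with the bookkeeping used by informed search."""
    state: object
    parent: Optional["Node"] = None
    g: float = 0.0   # cost of the path from the initial state to this node
    h: float = 0.0   # estimated cost of the cheapest path from here to a goal

    @property
    def f(self) -> float:
        """Evaluation function used to choose the next node to expand."""
        return self.g + self.h
```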

  9.–15. [Figures: a worked route-finding example on a graph with nodes A, B, C, D, E and edge costs 50, 60, 75, 80, 100, 125, and 150; successive slides expand paths with accumulated costs 50, 125, 200, 300, and 450, against an alternative route of cost 380.]

  16. Heuristic Functions • Estimate of path cost h from a state to the nearest solution • h(state) >= 0 • h(solution) = 0 • Example: straight-line distance ("as the crow flies") in route finding • Where does h come from? Maths, introspection, inspection, or programs (e.g. ABSOLVER) [Figure: a map of London, Liverpool, Leeds, Nottingham, and Peterborough with straight-line distances 135, 155, 75, and 120.]
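
A small sketch of a straight-line-distance heuristic for route finding; the (x, y) coordinates below are invented placeholders, not data from the slide's map:

```python
import math

# Hypothetical city positions; a real map would supply these.
COORDS = {
    "London": (2.5, -1.0),
    "Liverpool": (-2.0, 2.0),
    "Leeds": (0.0, 3.0),
    "Nottingham": (1.0, 1.5),
    "Peterborough": (2.0, 0.5),
}


def straight_line_distance(city: str, goal: str = "London") -> float:
    """h(city): 'as the crow flies' (Euclidean) distance from city to the goal."""
    (x1, y1), (x2, y2) = COORDS[city], COORDS[goal]
    return math.hypot(x1 - x2, y1 - y2)
```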

  17. Romania with straight-line dist.

  18. Examples (2): 8-puzzle • f1(T) = the number of correctly placed tiles on the board: f1 = 4 • f2(T) = the number of incorrectly placed tiles on the board: f2 = 4; gives a (rough!) estimate of how far we are from the goal • Most often, 'distance to goal' heuristics are more useful! [Figure: an 8-puzzle board with tiles 1–8 illustrating f1 and f2.]

  19. Examples (3): Manhattan distance • f3(T) = the sum of (the horizontal + vertical distance that each tile is away from its final destination) • Gives a better estimate of distance from the goal node • For the board shown, f3 = 1 + 1 + 2 + 2 = 6 [Figure: an 8-puzzle board with tiles 1–8 illustrating the computation.]
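
A sketch of the misplaced-tiles heuristic f2 and the Manhattan-distance heuristic f3, assuming a board is stored as a tuple of nine entries read row by row with 0 for the blank (this representation and the goal layout are assumptions for illustration):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, 0 = blank


def misplaced_tiles(board, goal=GOAL):
    """f2(T): number of tiles (blank excluded) that are not on their goal square."""
    return sum(1 for i, tile in enumerate(board) if tile != 0 and tile != goal[i])


def manhattan_distance(board, goal=GOAL):
    """f3(T): sum of horizontal + vertical distances of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Manhattan distance is never smaller than the misplaced-tiles count, which is one reason it tends to guide the search better.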

  20. Examples (4): Chess • f(T) = (value count of black pieces) − (value count of white pieces) [Figure: a board position with f written out as v(·) + v(·) + v(·) + v(·) − v(·) − v(·) over the pieces shown.]

  21. Heuristic Evaluation Function • Evaluates how promising a state n is for solving the problem: f(n) = g(n) + h(n) • where f is the evaluation function • g(n) is the actual length of the path from the start state to state n • h(n) estimates the distance from state n to the goal

  22. Search Methods • Best-first search • Greedy best-first search • A* search • Hill-climbing search • Genetic algorithms

  23. Best-First Search • Evaluation function f gives cost for each state • Choose state with smallest f(state) (‘the best’) • Agenda: f decides where new states are put • Graph: f decides which node to expand next • Many different strategies depending on f • For uniform-cost search f = path cost • greedy best-first search • A* search
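
All of these strategies fit one skeleton: keep the frontier ordered by f and always take the smallest-f node next. The sketch below assumes the problem is given as a successors function and a goal test (these interfaces and names are illustrative, not from the slides):

```python
import heapq
import itertools


def best_first_search(start, successors, is_goal, f):
    """Generic best-first search.

    successors(state) -> iterable of (child_state, step_cost)
    f(g, state)       -> priority used to order the frontier
    Returns (path, cost) or None on failure.
    """
    tie = itertools.count()   # tie-breaker so states are never compared directly
    frontier = [(f(0.0, start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for child, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):   # keep only the cheapest known path
                best_g[child] = g2
                heapq.heappush(frontier, (f(g2, child), next(tie), g2, child, path + [child]))
    return None
```

Passing f = lambda g, s: g gives uniform-cost search, f = lambda g, s: h(s) gives greedy best-first search, and f = lambda g, s: g + h(s) gives A*.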

  24. Greedy best-first search • Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal • Ignores the path cost • Greedy best-first search expands the node that appears to be closest to goal

  25. Greedy search • Use f(n) = h(n) as the evaluation function • Selects the node believed to be closest to a goal node (hence "greedy"), i.e. the node with the smallest f value, as in the example • Assuming all arc costs are 1, greedy search will find goal g, which has a solution cost of 5 • However, the optimal solution is the path to goal i, with cost 3 [Figure: a search tree with nodes a–i labelled with their h values (h=4, h=2, h=1, h=0) and two goal nodes g and i.]

  26. Romania with step costs in km

  27.–30. Greedy best-first search example [Figures: successive expansion steps of greedy best-first search on the Romania map.]

  31. Optimal Path

  32. Greedy Best-First Search Algorithm
  Input: state space. Output: failure, or a path from a start state to a goal state.
  Assumptions:
  • L is a list of nodes that have not yet been examined, ordered by their h value.
  • The state space is a tree, where each node has a single parent.
  Algorithm:
  • Set L to be a list of the initial nodes in the problem.
  • While L is not empty:
    • Pick a node n from the front of L.
    • If n is a goal node, stop and return it and the path from the initial node to n.
    • Else, remove n from L. For each child c of n, insert c into L, preserving the ordering of nodes in L and labelling c with its path from the initial node as well as its h value.
  • Return failure.
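
A compact Python rendering of this algorithm, assuming successors(n) enumerates the children of n and h(n) is the heuristic (both names are stand-ins). As on the slide, the state space is assumed to be a tree, so no visited set is kept:

```python
import heapq
import itertools


def greedy_best_first(start_nodes, successors, is_goal, h):
    """Greedy best-first search: L holds unexamined nodes ordered by their h value."""
    tie = itertools.count()                       # tie-breaker for equal h values
    L = [(h(n), next(tie), n, [n]) for n in start_nodes]
    heapq.heapify(L)
    while L:                                      # while L is not empty
        _, _, n, path = heapq.heappop(L)          # pick the node n from the front of L
        if is_goal(n):
            return path                           # the path from the initial node to n
        for c in successors(n):                   # insert each child, preserving the ordering
            heapq.heappush(L, (h(c), next(tie), c, path + [c]))
    return None                                   # failure
```

On a general graph (rather than a tree) this would also need a record of visited states to avoid loops, as slide 33 notes.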

  33. Properties of greedy best-first search • Complete? Not unless it keeps track of all states visited; otherwise it can get stuck in loops (just like DFS) • Optimal? No – we just saw a counter-example • Time? O(b^m): can generate all nodes at depth m before finding a solution, where m = maximum depth of the search space • Space? O(b^m) – again, in the worst case it can generate all nodes at depth m before finding a solution

  34. Uniform Cost Search • Let g(n) be the sum of the edge costs from the root to node n. If g(n) is our overall cost function, then best-first search becomes Uniform Cost Search, also known as Dijkstra's single-source shortest-path algorithm. • Initially the root node is placed in Open with a cost of zero. At each step, the next node n to be expanded is an Open node whose cost g(n) is lowest among all Open nodes.
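
A sketch of uniform-cost search over a weighted graph, with Open as a priority queue ordered by g; the dictionary-of-edge-lists graph format is an assumption for the example:

```python
import heapq
import itertools


def uniform_cost_search(graph, root, goal):
    """graph: dict mapping node -> list of (child, edge_cost). Returns (cost, path) or None."""
    tie = itertools.count()
    open_list = [(0.0, next(tie), root, [root])]   # the root enters Open with a cost of zero
    closed = set()
    while open_list:
        g, _, n, path = heapq.heappop(open_list)   # the Open node whose g(n) is lowest
        if n in closed:
            continue
        closed.add(n)
        if n == goal:
            return g, path
        for child, cost in graph.get(n, []):
            if child not in closed:
                heapq.heappush(open_list, (g + cost, next(tie), child, path + [child]))
    return None
```

For example, uniform_cost_search({"A": [("B", 75), ("C", 118)], "B": [("D", 71)], "C": [], "D": []}, "A", "D") returns a cost of 146 with path ["A", "B", "D"].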

  35. Example of Uniform Cost Search • Assume an example tree with different edge costs, represented by the numbers next to the edges. • Notations for this example: generated node, expanded node. [Figure: a tree rooted at a with children and grandchildren b–g and edge costs of 1 and 2.]

  36.–39. Uniform-cost search sample [Figures: successive expansion steps; frontier costs start at 0, then 75, 140, 118, then 146, 140, 118, then 146, 140, 229, with already-expanded nodes crossed out.]

  40. Uniform-cost search • Complete? Yes • Time? Number of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is the minimum edge cost • Space? Number of nodes with g ≤ cost of the optimal solution, O(b^⌈C*/ε⌉) • Optimal? Yes

  41. Hill Climbing & Gradient Descent • For artefact-only problems (don’t care about the path) • Depends on some e(state) • Hill climbing tries to maximise score e • Gradient descent tries to minimise cost e (the same strategy!) • Randomly choose a state • Only choose actions which improve e • If cannot improve e, then perform a random restart • Choose another random state to restart the search from • Only ever have to store one state (the present one) • Can’t have cycles as e always improves

  42. Hill-climbing search • Problem: depending on initial state, can get stuck in local maxima

  43. Hill Climbing - Algorithm 1. Pick a random point in the search space 2. Consider all the neighbors of the current state 3. Choose the neighbor with the best quality and move to that state 4. Repeat steps 2 and 3 until all the neighboring states are of lower quality 5. Return the current state as the solution state
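
A sketch of this loop in Python, assuming random_state(), neighbors(state), and quality(state) are supplied by the problem (all three names are stand-ins):

```python
def hill_climbing(random_state, neighbors, quality):
    current = random_state()                   # 1. pick a random point in the search space
    while True:
        candidates = list(neighbors(current))  # 2. consider all the neighbors of the current state
        if not candidates:
            return current
        best = max(candidates, key=quality)    # 3. choose the neighbor with the best quality
        if quality(best) <= quality(current):  # 4. stop when all neighbors are of lower quality
            return current                     # 5. return the current state as the solution state
        current = best
```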

  44. Example: 8 Queens • Place 8 queens on the board so that no queen can "take" another • Gradient descent search • Throw the queens on randomly • e = number of pairs of queens which can attack each other • Move a queen out of the others' way to decrease the evaluation function • If this can't be done, throw the queens on randomly again
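
A sketch of the e function and the random restart for 8 queens, representing a state as a tuple that gives each column's queen row (a common representation, assumed here rather than taken from the slide):

```python
import random

N = 8


def random_board():
    """Throw the queens on randomly: one queen per column, in a random row."""
    return tuple(random.randrange(N) for _ in range(N))


def attacking_pairs(board):
    """e(state): number of pairs of queens that can attack each other."""
    e = 0
    for c1 in range(N):
        for c2 in range(c1 + 1, N):
            same_row = board[c1] == board[c2]
            same_diagonal = abs(board[c1] - board[c2]) == c2 - c1
            e += same_row or same_diagonal
    return e


def neighbors(board):
    """Move one queen to a different row within its own column."""
    for col in range(N):
        for row in range(N):
            if row != board[col]:
                yield board[:col] + (row,) + board[col + 1:]
```

Gradient descent then repeatedly moves to the neighbor with the smallest e, calling random_board() again whenever no neighbor improves, until e reaches 0.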

  45. Hill-climbing search • Looks one step ahead to determine if any successor is better than the current state; if there is, move to the best successor. • Rule: If there exists a successor s of the current state n such that h(s) < h(n) and h(s) ≤ h(t) for all the successors t of n, then move from n to s. Otherwise, halt at n.

  46. Hill-climbing search • Similar to Greedy search in that it uses h(), but does not allow backtracking or jumping to an alternative path since it doesn’t “remember” where it has been.

  47. A* Search Algorithm • Evaluation function f(n) = g(n) + h(n) • h(n) = estimated cost to the goal from n • g(n) = cost so far to reach n • A* uses admissible heuristics, i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n • A* search finds the optimal path

  48. A* search • Best-known form of best-first search • Idea: avoid expanding paths that are already expensive • Combines uniform-cost and greedy search • Evaluation function f(n) = g(n) + h(n) • g(n): the cost (so far) to reach the node • h(n): estimated cost to get from the node to the goal • f(n): estimated total cost of the path through n to the goal • Implementation: expand the node n with minimum f(n)
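
A sketch of A* over a weighted graph with an admissible heuristic; the graph format mirrors the uniform-cost sketch above and is likewise an assumption:

```python
import heapq
import itertools


def a_star(graph, start, goal, h):
    """graph: dict node -> list of (child, edge_cost); h(n): admissible estimate of cost to goal."""
    tie = itertools.count()
    open_list = [(h(start), next(tie), 0.0, start, [start])]   # ordered by f(n) = g(n) + h(n)
    best_g = {start: 0.0}
    while open_list:
        _, _, g, n, path = heapq.heappop(open_list)            # expand the node with minimum f(n)
        if n == goal:
            return g, path
        for child, cost in graph.get(n, []):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):           # keep only the cheapest known g
                best_g[child] = g2
                heapq.heappush(open_list, (g2 + h(child), next(tie), g2, child, path + [child]))
    return None
```

With h(n) = 0 for every n this reduces to uniform-cost search; the optimality guarantee depends on h never overestimating (and, for this graph version, on h being consistent).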
