
UNIT 2 : AI Problem Solving


Presentation Transcript


  1. UNIT 2 : AI Problem Solving • Define the problem precisely, including a specification of the initial situation and of the final situation that constitutes a solution to the problem. • Analyze the problem to find a few important features that affect the appropriateness of the solution technique. • Isolate and represent the knowledge that is necessary for the solution. • Select the best problem-solving technique. By: Anuj Khanna (Asst. Prof.), www.uptunotes.com

  2. State Space • The state space of a problem includes: • An initial state. • One or more goal states. • The sequence of intermediate states through which the system passes while applying the various rules. • A state space may be a tree or a graph. • The state space for the Water Jug Problem (WJP) can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3, where x and y are the gallons of water in the 4-gallon and 3-gallon jugs respectively. The start state is (0, 0) and the goal state is (2, n) for any value of n.

  3. Rules for the Water Jug Problem • {(x, y) | x < 4} → (4, y) Fill the 4-gallon jug • {(x, y) | y < 3} → (x, 3) Fill the 3-gallon jug • {(x, y) | x > 0} → (0, y) Empty the 4-gallon jug • {(x, y) | y > 0} → (x, 0) Empty the 3-gallon jug • {(x, y) | x + y ≥ 4 and y > 0} → (4, x + y - 4) Pour from the 3-gallon jug until the 4-gallon jug is full • {(x, y) | x + y ≥ 3 and x > 0} → (x + y - 3, 3) Pour from the 4-gallon jug until the 3-gallon jug is full • {(x, y) | x + y ≤ 4 and y > 0} → (x + y, 0) Pour all the water from the 3-gallon jug into the 4-gallon jug • {(x, y) | x + y ≤ 3 and x > 0} → (0, x + y) Pour all the water from the 4-gallon jug into the 3-gallon jug • (0, 2) → (2, 0) • (2, y) → (0, y) • {(x, y) | y > 0} → (x, y - d) Useless rule • {(x, y) | x > 0} → (x - d, y) Useless rule
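These productions translate directly into a successor function. The following is a minimal Python sketch (not part of the original slides), assuming x is the 4-gallon jug and y the 3-gallon jug; the two "useless" rules are omitted and a simple level-by-level search looks for a state with x = 2.

```python
from collections import deque

def successors(state):
    x, y = state
    next_states = []
    if x < 4: next_states.append((4, y))                          # fill the 4-gallon jug
    if y < 3: next_states.append((x, 3))                          # fill the 3-gallon jug
    if x > 0: next_states.append((0, y))                          # empty the 4-gallon jug
    if y > 0: next_states.append((x, 0))                          # empty the 3-gallon jug
    if x + y >= 4 and y > 0: next_states.append((4, x + y - 4))   # pour y into x until x is full
    if x + y >= 3 and x > 0: next_states.append((x + y - 3, 3))   # pour x into y until y is full
    if x + y <= 4 and y > 0: next_states.append((x + y, 0))       # pour all of y into x
    if x + y <= 3 and x > 0: next_states.append((0, x + y))       # pour all of x into y
    return next_states

def solve(start=(0, 0)):
    """Level-by-level search for a state with 2 gallons in the 4-gallon jug."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == 2:            # goal: (2, n) for any n
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])

print(solve())   # prints a shortest sequence of states ending with 2 gallons in the 4-gallon jug
```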

  4. Problem Characteristics 1) Is the problem decomposable? 2) Can solution steps be ignored, or at least undone, if they prove unwise? E.g.: the 8-puzzle problem, the Monkey and Banana problem. In the 8-puzzle we can make a wrong move and then recover by backtracking and undoing it. On this basis, problems can be: • Ignorable (e.g. theorem proving) • Recoverable (e.g. the 8-puzzle) • Irrecoverable (e.g. chess, card games such as Bridge) Note: ** Ignorable problems can be solved using a simple control structure that never backtracks. Such a structure is easy to implement.

  5. ** Recoverable problems can be solved by a slightly more complicated control strategy that can be error-prone (here solution steps can be undone). ** Irrecoverable problems are solved by a system that expends a great deal of effort making each decision, since every decision must be final (solution steps cannot be undone). 3) Is the universe predictable? Can we plan an entire sequence of moves in advance and predict the resulting states? In the 8-puzzle the whole sequence can be planned before making the final move, whereas in Bridge the outcome of each play is uncertain, so the full sequence cannot be planned with certainty. • Certain outcomes: 8-puzzle • Uncertain outcomes: Bridge • Hardest problems to solve: irrecoverable + uncertain outcomes 4) Is a good solution absolute or relative?

  6. 5) Is the solution a state or a path? 6) What is the role of knowledge? 7) Does the task require interaction with a person? 8) Problem classification.

  7. Search Techniques (Blind) • Search strategies satisfying the two required properties of a control strategy (being dynamic, i.e. causing motion, and being systematic) are: • Breadth-First Search (BFS) • Depth-First Search (DFS) • The problem with BFS is combinatorial explosion. • The problem with DFS is that it may lead down a blind alley; DFS must backtrack when it reaches: • A dead end. • A state which has already been generated. • A path whose length exceeds the futility value.

  8. Advantages Advantages of BFS: • It will not get trapped exploring a blind alley. • It is guaranteed to find a solution if one exists, and the solution found will also be optimal (in terms of the number of rules applied). Advantages of DFS: • It requires less memory. • By chance it may find a solution without examining much of the search space.

  9. Search Strategies • A search strategy is defined by picking the order of node expansion. • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the least-cost solution • m: maximum depth of the state space (may be ∞)

  10. Classification of Search Strategies I. Uninformed search strategies use only the information available in the problem definition: • Breadth-first search • Depth-first search • Depth-limited search • Iterative deepening search • Branch and bound II. Informed search (heuristic search): • Hill climbing: (i) simple hill climbing, (ii) steepest-ascent hill climbing • Best-first search • A* and AO* algorithms • Problem reduction • Constraint satisfaction • Means-ends analysis, simulated annealing

  11. Breadth-first search • Expand the shallowest unexpanded node. • Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
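A minimal sketch of this idea in Python, assuming a successors(state) function like the water-jug one earlier and a goal_test predicate (both names are placeholders); the fringe is an explicit FIFO queue of paths.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    fringe = deque([[start]])          # FIFO queue of paths; new successors go at the end
    visited = {start}
    while fringe:
        path = fringe.popleft()        # shallowest unexpanded node
        state = path[-1]
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in visited:
                visited.add(s)
                fringe.append(path + [s])
    return None                        # no solution exists
```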


  15. Properties of breadth-first search • Complete? Yes (if b is finite). • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step). • Space is the bigger problem (more than time).

  16. Depth-first search • Expand the deepest unexpanded node. • Implementation: the fringe is a LIFO queue (a stack), i.e., successors are put at the front.
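The same skeleton with the fringe used as a LIFO stack gives depth-first search. In the sketch below, the depth limit and the loop check along the current path are safeguards added here, not taken from the slides.

```python
def depth_first_search(start, goal_test, successors, limit=50):
    fringe = [[start]]                  # LIFO stack of paths; successors go on top
    while fringe:
        path = fringe.pop()             # deepest unexpanded node
        state = path[-1]
        if goal_test(state):
            return path
        if len(path) <= limit:          # stop extending paths beyond the depth limit
            for s in successors(state):
                if s not in path:       # avoid loops along the current path
                    fringe.append(path + [s])
    return None
```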


  28. Properties of depth-first search • Complete? No: it fails in infinite-depth spaces and in spaces with loops. • Modified to avoid repeated states along the current path, it is complete in finite spaces. • Time? O(b^m): terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first search. • Space? O(bm), i.e., linear space. • Optimal? No.

  29. Comparison b/w DFS & BFS Depth First Search: (1) Downward traversal in the tree; if the goal is not found by the time a leaf node is reached, backtracking occurs. (2) Preferred over BFS when the search tree is known to have a plentiful number of goal states; otherwise DFS may never find the solution. (3) The depth cut-off point causes problems: if it is too shallow, goals may be missed; if it is set too deep, extra search nodes are computed. (4) Since only the path from the initial node to the current node is stored, less space is required; if the depth cut-off is d, the space complexity is O(d). Breadth First Search: (1) Performed by exploring all nodes at a given depth before moving to the next level. (2) If the goal is not found, many nodes may need to be expanded before a solution is found, particularly if the tree is very deep. (3) Finds a minimal-path-length solution when one exists; there is no cut-off problem. (4) Space complexity is O(b^d).

  30. Depth-limited search = depth-first search with a depth limit l, i.e., nodes at depth l have no successors. • Recursive implementation (a sketch is given below):
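The recursive implementation on the original slide is an image and is not reproduced in the transcript; the following is a sketch of how such a routine might look, returning a path on success, None on definite failure, and the marker 'cutoff' when the depth limit was reached without exhausting the space.

```python
def depth_limited_search(state, goal_test, successors, limit):
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'                  # nodes at depth l have no successors
    cutoff_occurred = False
    for s in successors(state):
        result = depth_limited_search(s, goal_test, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result      # prepend the current state to the solution path
    return 'cutoff' if cutoff_occurred else None
```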

  31. Iterative deepening search

  32. Iterative deepening search, l = 0

  33. Iterative deepening search, l = 1

  34. Iterative deepening search, l = 2

  35. Iterative deepening search, l = 3
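Putting the previous slides together: iterative deepening simply reruns the depth-limited routine sketched earlier with limits l = 0, 1, 2, … until a solution is found. The max_depth guard below is an assumption added for safety, not part of the slides.

```python
def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    for l in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, successors, l)
        if result != 'cutoff':
            return result               # either a solution path or definite failure (None)
    return None
```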

  36. Iterative deepening search • Number of nodes generated in a depth-limited search to depth d with branching factor b: NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d • Number of nodes generated in an iterative deepening search to depth d with branching factor b: NIDS = (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d • For b = 10, d = 5: • NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111 • NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456 • Overhead = (123,456 - 111,111) / 111,111 ≈ 11%
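A quick check of this arithmetic for b = 10 and d = 5 (a small snippet added here, not from the slides):

```python
b, d = 10, 5
N_DLS = sum(b**i for i in range(d + 1))                 # 1 + b + ... + b^d
N_IDS = sum((d + 1 - i) * b**i for i in range(d + 1))   # (d+1)b^0 + d*b^1 + ... + 1*b^d
print(N_DLS, N_IDS)                                     # 111111 123456
print(round(100 * (N_IDS - N_DLS) / N_DLS))             # overhead ≈ 11 %
```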

  37. Properties of iterative deepening search • Complete? Yes. • Time? (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d) • Space? O(bd) • Optimal? Yes, if step cost = 1.

  38. Difference b/w Informed & Uninformed Search Uninformed search: (1) Nodes in the state space are searched mechanically until the goal is reached, the time limit is exceeded, or failure occurs. (2) Information about the goal state may not be given. (3) Searching amounts to blind groping. (4) Search efficiency is low. (5) There are practical limits on the storage available to blind methods, so it is impractical to solve very large problems. (6) The best solution can be achieved. Examples: DFS, BFS, branch and bound, iterative deepening, etc. Informed search: (1) More information about the initial state and operators is available, so search time is less. (2) Some information about the goal is always given. (3) Based on heuristic methods; searching is fast and less computation is required. (4) Can handle large search problems. (5) Usually a good-enough solution is accepted as the optimal solution. Examples: best-first search, A*, AO*, hill climbing, etc.

  39. Heuristic Search • Search strategies like DFS and BFS can find solutions to simple problems. • For complex problems, although DFS and BFS are guaranteed to find solutions, those solutions may not be practical ones (for the TSP the time is proportional to N!, or exponential with branch and bound). • Thus it can be better to sacrifice completeness and find an efficient solution. • Heuristic search techniques improve efficiency by sacrificing the claim of completeness and find a solution that is very close to the optimal solution. • Using the nearest-neighbour heuristic, the TSP can be solved in time proportional to the square of N.

  40. When more information than just the initial state, the operators, and the goal state is available, the size of the search space can usually be constrained; the better the information available, the more efficient the search process. Such methods are called informed search methods. • They depend on heuristic information. • Heuristic search improves the efficiency of the search process, possibly by sacrificing claims of completeness. "Heuristics are like tour guides. They are good to the extent that they point in generally interesting directions, and bad to the extent that they may miss points of interest to particular individuals." E.g.: a good general-purpose heuristic that is useful for a variety of combinatorial problems is the nearest-neighbour heuristic, which works by selecting the locally superior alternative at each step. Applied to the Travelling Salesman Problem, the following algorithm is used:

  41. 1. Arbitrarily select a starting city, say A. 2. To select the next city, look at all the cities not yet visited and select the one closest to the current city; go to it next. 3. Repeat step 2 until all the cities have been visited (a code sketch of this heuristic is given after this slide). Combinatorial Explosion: The TSP involves n cities with paths connecting the cities. A tour is any path which begins at some starting city, visits each of the other cities exactly once, and returns to the starting city. • If there are n cities, the number of different paths among them is (n-1)!. • The time to examine a single path is proportional to n, so the total search time required is T = n(n-1)! = n!. • If n = 10, then 10! = 3,628,800 possible paths, which is a very large number. This phenomenon of the number of possible paths growing rapidly as n increases is called "combinatorial explosion".
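A minimal sketch of the nearest-neighbour heuristic described in the steps above, assuming dist is a symmetric dictionary mapping pairs of city names to distances (the data structure and names are assumptions, not from the slides).

```python
def nearest_neighbour_tour(cities, dist, start):
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        current = tour[-1]
        nearest = min(unvisited, key=lambda c: dist[current, c])  # closest unvisited city
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour + [start]               # return to the starting city

def tour_length(tour, dist):
    return sum(dist[a, b] for a, b in zip(tour, tour[1:]))
```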

  42. Branch and Bound Technique • To overcome the problem of combinatorial explosion, the branch and bound technique is used. • It begins by generating one path at a time, keeping track of the shortest (best) complete path found so far. This value is used as a bound (threshold) for future paths. • As paths are constructed one city at a time, the algorithm examines each partial path and compares it with the current bound value. • We give up exploring any path as soon as its partial length becomes greater than the shortest path (bound value) found so far. • This reduces the search and increases efficiency, but it still leaves an exponential number of paths.
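A sketch of branch and bound for the TSP in the spirit described above, reusing the hypothetical dist dictionary from the nearest-neighbour sketch: partial tours are extended one city at a time and abandoned as soon as their length reaches the best complete tour found so far.

```python
def branch_and_bound_tsp(cities, dist, start):
    best_tour, best_len = None, float('inf')

    def extend(partial, length):
        nonlocal best_tour, best_len
        if length >= best_len:                     # bound: give up on this partial path
            return
        if len(partial) == len(cities):            # complete tour: close the loop
            total = length + dist[partial[-1], start]
            if total < best_len:
                best_tour, best_len = partial + [start], total
            return
        for city in cities:
            if city not in partial:                # branch: extend by one unvisited city
                extend(partial + [city], length + dist[partial[-1], city])

    extend([start], 0)
    return best_tour, best_len
```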

  43. Heuristic Function • A heuristic function is a function that maps problem state descriptions to measures of desirability, usually represented as numbers. • Well-designed heuristic functions can play an important role in efficiently guiding a search process towards a solution. • It is called an objective function in mathematical optimization problems. • A heuristic function estimates the true merit of each node in the search space. • Heuristic evaluation function: f(n) = g(n) + h(n), where • g(n) is the cost (so far) to reach node n, • h(n) is the estimated cost to get from node n to the goal, and • f(n) is the estimated total cost of the path through n to the goal.
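As an illustration of f(n) = g(n) + h(n), the sketch below uses the Manhattan-distance heuristic for the 8-puzzle; the heuristic choice and the goal layout are common conventions assumed here, not taken from the slides.

```python
GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))          # 0 denotes the blank tile (assumed goal layout)

def h(state):
    """Estimated cost to the goal: sum of Manhattan distances of the misplaced tiles."""
    goal_pos = {tile: (r, c) for r, row in enumerate(GOAL) for c, tile in enumerate(row)}
    return sum(abs(r - goal_pos[tile][0]) + abs(c - goal_pos[tile][1])
               for r, row in enumerate(state)
               for c, tile in enumerate(row) if tile != 0)

def f(state, g):
    """Estimated total cost of a path through this node: cost so far plus estimate to goal."""
    return g + h(state)
```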

  44. Heuristic Search Techniques • Generate and Test • Hill Climbing • Simple Hill Climbing • Steepest-Ascent Hill Climbing • Best First Search • Problem Reduction Technique • Constraint Satisfaction Technique • Means-Ends Analysis

  45. Generate and Test • Generate a possible solution and compare it with the acceptable solution. • The comparison is simply in terms of yes or no, i.e., is it an acceptable solution or not? • A systematic generate-and-test procedure can be implemented as depth-first search with backtracking.
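A minimal sketch of the generate-and-test loop, where both the candidate generator (e.g. a systematic depth-first enumeration with backtracking) and the acceptability test are hypothetical placeholders supplied by the problem.

```python
def generate_and_test(generate_candidates, is_acceptable):
    for candidate in generate_candidates():     # systematic generator of possible solutions
        if is_acceptable(candidate):            # the test: simply yes or no
            return candidate
    return None                                 # no acceptable solution found
```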

  46. Hill Climbing • A variation of generate and test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space. • It is generally used when a good heuristic function is available for evaluating states but no other useful knowledge is available. • Simple hill climbing: from the current state, move to the first generated state that is better than the current state. • Steepest-ascent hill climbing: at the current state, select the best of the newly generated states, and move only if it is better than the current state. • Hill climbing is a local method because it decides what to do next by looking only at the immediate consequences of its choices rather than by exhaustively exploring all of the consequences.
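A sketch contrasting the two variants, assuming a value(state) heuristic to be maximised and a successors(state) generator; both names are placeholders, not from the slides.

```python
def simple_hill_climbing(state, value, successors):
    while True:
        improved = False
        for s in successors(state):
            if value(s) > value(state):     # take the first better neighbour
                state, improved = s, True
                break
        if not improved:
            return state                    # no better neighbour: stop (possibly a local maximum)

def steepest_ascent_hill_climbing(state, value, successors):
    while True:
        best = max(successors(state), key=value, default=state)  # best neighbour only
        if value(best) <= value(state):
            return state                    # no neighbour improves on the current state
        state = best
```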

  47. Problems with Hill Climbing • Both simple and steepest-ascent hill climbing may fail to find a solution because of the following. • Local maximum: a state that is better than all its neighbours but is not better than some other states farther away. • Plateau: a flat area of the search space in which a whole set of neighbouring states have the same value. • Ridge: a special kind of local maximum; an area of the search space that is higher than the surrounding areas and that itself has a slope.

  48. Example • Block World Problem (the initial and goal block configurations are shown in the slide's figure).

  49. Example • Heuristic functions: the following heuristic functions may be used. • Local: add one point for every block that is resting on the thing it is supposed to be resting on; subtract one point for every block that is sitting on the wrong thing. • Global: for each block that has the correct support structure, add one point for every block in that support structure; for each block that has an incorrect support structure, subtract one point for every block in its existing support structure. • With the local heuristic function the initial state has value 4 and the goal state has value 8, whereas with the global heuristic the values are -28 and +28 respectively.
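A hedged sketch of the local heuristic above, assuming a state is represented as a dictionary mapping each block to what it rests on ('table' or another block); the actual block configurations appear only in the slide's figure and are not reproduced here.

```python
def local_heuristic(state, goal):
    """+1 for each block on its correct support, -1 for each block on the wrong thing."""
    score = 0
    for block, support in state.items():
        if goal[block] == support:
            score += 1      # resting on the thing it is supposed to rest on
        else:
            score -= 1      # sitting on the wrong thing
    return score
```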

  50. Example • From the initial state only one move is possible, giving a new state with value 6 (-21 with the global heuristic). • From this state three moves are possible, giving three new states with values 4 (-28), 4 (-16), and 4 (-15). • Thus we see that we have reached a plateau with the local evaluation. • With the global evaluation, the next state to be selected (by steepest-ascent hill climbing) is the one with value -15, which may lead to the solution. • Why are we not able to find the solution with the local heuristic? Is it because of a deficiency of the search technique or because of a poor heuristic function?
