
Midterm Review




  1. Midterm Review Dr. Bernard Chen Ph.D. University of Central Arkansas Spring 2011

  2. Outline • Ch3 Structures and Strategies for State Space Search • Ch4 Heuristic Search • Ch5 Stochastic Search

  3. Introduction to Representation • The function of a representation is to capture the critical features of a problem and make that information accessible to a problem-solving procedure • Expressiveness (the result of the features abstracted) and efficiency (the computational complexity) are the major dimensions for evaluating knowledge representations

  4. Introduction to Search • Given a representation, the second component of intelligent problem solving is search • Humans generally consider a number of alternative strategies on their way to solving a problem • Chess, for example • A player reviews alternative moves and selects the “best” move • A player can also consider short-term gains

  5. Introduction to Search • We can represent this collection of possible moves by regarding each board as a state in a graph • The links of the graph represent legal moves • The resulting structure is a state space graph

  6. “Tic-tac-toe” state space graph

  7. State Space Representation • State space search characterizes problem solving as the process of finding a solution path from the start state to a goal • A goal may describe a state, such as a winning board in tic-tac-toe

  8. State Space Representation • A goal configuration in the 8-puzzle

  9. State Space Representation • The Traveling salesperson problem • Suppose a salesperson has five cities to visit and then must return home • The goal of the problem is to find the shortest path for the salesperson to travel
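A brute-force sketch of this search in Python (the city labels and distances below are made up for illustration, not taken from the slides):

    from itertools import permutations

    # Hypothetical symmetric distance table: home base "A" plus five cities "B".."F".
    dist = {
        ("A", "B"): 7, ("A", "C"): 9, ("A", "D"): 8, ("A", "E"): 20, ("A", "F"): 12,
        ("B", "C"): 10, ("B", "D"): 4, ("B", "E"): 11, ("B", "F"): 18,
        ("C", "D"): 15, ("C", "E"): 5, ("C", "F"): 8,
        ("D", "E"): 17, ("D", "F"): 6, ("E", "F"): 9,
    }

    def d(a, b):
        return dist[(a, b)] if (a, b) in dist else dist[(b, a)]

    def shortest_tour(home, cities):
        # Enumerate every ordering of the cities and keep the cheapest round trip.
        best = None
        for perm in permutations(cities):
            route = (home,) + perm + (home,)
            cost = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
            if best is None or cost < best[0]:
                best = (cost, route)
        return best

    print(shortest_tour("A", ["B", "C", "D", "E", "F"]))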

  10. State Space Representation

  11. BFS and DFS • In addition to specifying a search direction (data-driven or goal-driven), a search algorithm must determine the order in which states are examined in the graph • Two possibilities: • Depth-first search • Breadth-first search
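A minimal Python sketch of the two orderings on a generic state space graph; the example graph and the single-function framing are my own, not from the slides:

    from collections import deque

    def search(start, goal, successors, depth_first=False):
        # Breadth-first pops from the front of the queue; depth-first pops from the back.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.pop() if depth_first else frontier.popleft()
            state = path[-1]
            if state == goal:
                return path
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    # Hypothetical graph for illustration.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"], "F": []}
    print(search("A", "F", lambda s: graph[s]))                    # BFS: ['A', 'B', 'D', 'F']
    print(search("A", "F", lambda s: graph[s], depth_first=True))  # DFS: ['A', 'C', 'E', 'F']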

  12. 8-puzzle BFS

  13. 8-puzzle DFS

  14. Outline • Ch3 Structures and Strategies for State Space Search • Ch4 Heuristic Search • Ch5 Stochastic Search

  15. Introduction George Polya defines heuristic as “the study of the methods and rules of discovery and invention.” This meaning can be traced to the term’s Greek root, the verb eurisco, which means “I discover.” When Archimedes emerged from his famous bath clutching the golden crown, he shouted “Eureka!”, meaning “I have found it.” In AI, heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable problem solution

  16. Introduction Consider heuristics in the game of tic-tac-toe. A simple analysis puts the total number of states at 9! Symmetry reductions decrease the search space; thus, there are not nine but three distinct initial moves: to a corner, to the center of a side, or to the center of the grid

  17. Introduction

  18. Introduction Use of symmetry on the second level further reduces the number of paths through the space to 3 * 12 * 7! A simple heuristic can almost eliminate search entirely: we may move to the state in which X has the most winning opportunities. In this case, X takes the center of the grid as the first move
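A small Python sketch of that “most winning opportunities” heuristic, using a board encoding of my own; on an empty board it reproduces the center > corner > side ranking, which is why X opens in the center:

    # Indices 0-8 name the squares row by row; these are the eight winning lines.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winning_opportunities(board, square, player="X"):
        # Count the winning lines through `square` that the opponent has not blocked.
        opponent = "O" if player == "X" else "X"
        return sum(1 for line in LINES
                   if square in line and all(board[i] != opponent for i in line))

    empty = [" "] * 9
    print(winning_opportunities(empty, 4))  # center of the grid: 4 open lines
    print(winning_opportunities(empty, 0))  # corner:             3 open lines
    print(winning_opportunities(empty, 1))  # center of a side:   2 open lines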

  19. Introduction

  20. Introduction

  21. Hill-Climbing The simplest way to implement heuristic search is through a procedure called hill-climbing. It expands the current state of the search and evaluates its children. The best child is selected for further expansion; neither its siblings nor its parent are retained. The tic-tac-toe heuristic we just saw is an example
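A minimal hill-climbing sketch following that description; the successor and evaluation functions are placeholders that each problem must supply:

    def hill_climb(state, successors, evaluate):
        # Repeatedly expand the current state and keep only its best child.
        while True:
            children = successors(state)
            if not children:
                return state
            best = max(children, key=evaluate)
            if evaluate(best) <= evaluate(state):
                return state      # no child improves on the current state, so stop
            state = best          # neither siblings nor the parent are retained

    # Toy usage: climb along the integer line toward 10.
    print(hill_climb(0, lambda s: [s - 1, s + 1], lambda s: -abs(s - 10)))  # -> 10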

  22. Dynamic Programming (DP) DP keeps track of and reuses the solutions of multiple interacting and interrelated subproblems. An example is reusing the subseries solutions within the solution of the Fibonacci series. The technique of caching subproblem solutions for reuse is sometimes called memoizing partial subgoal solutions
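A minimal memoized Fibonacci sketch in Python illustrating that reuse of subseries solutions:

    def fib(n, memo={0: 0, 1: 1}):
        # Cache every subproblem so overlapping subseries are computed only once.
        if n not in memo:
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
        return memo[n]

    print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]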

  23. Dynamic Programming (DP)

  24. Dynamic Programming (DP) • BAADDCABDDA • BBA_DC_B_ _A
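Such an alignment is produced by filling a dynamic programming table; below is a minimal sketch of the standard minimum-edit-distance recurrence (a generic formulation, not necessarily the exact scoring used in the text):

    def edit_distance(a, b):
        # dp[i][j] = cost of aligning a[:i] with b[:j]; each cell reuses smaller subproblems.
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            for j in range(len(b) + 1):
                if i == 0 or j == 0:
                    dp[i][j] = i + j                   # aligning against an empty prefix
                else:
                    dp[i][j] = min(dp[i - 1][j] + 1,   # delete from a
                                   dp[i][j - 1] + 1,   # insert into a
                                   dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # match or substitute
        return dp[len(a)][len(b)]

    # Second string read from the alignment above with the gap characters removed.
    print(edit_distance("BAADDCABDDA", "BBADCBA"))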

  25. Best First Search • For the 8-puzzle game, we may add 3 different types of information into the code: • The simplest heuristic counts the tiles out of place in each state • A “better” heuristic sums all the distances by which the tiles are out of place
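A minimal Python sketch of those two heuristics, assuming a tuple-of-nine board read row by row, 0 for the blank, and a conventional goal layout (my encoding, not the slides’):

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal layout; blank encoded as 0

    def tiles_out_of_place(state):
        # Simplest heuristic: count tiles that are not on their goal square (blank ignored).
        return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

    def manhattan_distance(state):
        # "Better" heuristic: sum each tile's row and column distance from its goal square.
        total = 0
        for i, tile in enumerate(state):
            if tile != 0:
                g = GOAL.index(tile)
                total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
        return total

    state = (2, 8, 3, 1, 6, 4, 7, 0, 5)
    print(tiles_out_of_place(state), manhattan_distance(state))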

  26. Best First Search

  27. Best First Search

  28. Minimax Procedure on Exhaustively Searchable Graphs • Let’s consider a variant of the game nim • To play this game, a number of tokens are placed on a table between the two players • At each move, a player must divide a pile of tokens into two nonempty piles of different sizes • Thus, 6 tokens may be divided into piles of 5 & 1 or 4 & 2, but not 3 & 3 • The first player who can no longer make a move loses the game

  29. Minimax Procedure on Exhaustively Searchable Graphs State space for a variant of nim. Each state partitions the seven matches into one or more piles.

  30. Minimax Procedure on Exhaustively Searchable Graphs

  31. Minimax Procedure on Exhaustively Searchable Graphs • Minimax propagates these values up the graph through successive parent nodes according to the rule: • If the parent is a MAX node, give it the maximum value among its children • If the parent is a MIN node, give it the minimum value among its children
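A minimal Python sketch of that propagation rule on a fully searched game tree; the tree literal is an arbitrary example, not the nim graph from the slides:

    def minimax(node, maximizing):
        # Leaves carry their evaluated value; interior nodes take the max or min of their children.
        if isinstance(node, (int, float)):
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # Hypothetical game tree: lists are interior nodes, numbers are leaf evaluations.
    tree = [[3, [5, 2]], [[0, 9], 1], [7, 4]]
    print(minimax(tree, maximizing=True))  # -> 4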

  32. Minimax Procedure on Exhaustively Searchable Graphs

  33. Exercises • Perform MINIMAX on the tree shown in Figure 4.30.

  34. Exercises

  35. Exercises • Consider 3D tic-tac-toe. • How would you represent the state space? • Analyze the complexity of the state space • Propose a heuristic for playing this game

  36. Outline • Ch3 Structures and Strategies for State Space Search • Ch4 Heuristic Search • Ch5 Stochastic Search

  37. Bayes’ Theorem • P(A) and P(B) are the prior probabilities of A and B • P(A|B) is the conditional probability of A, given B • P(B|A) is the conditional probability of B, given A
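For reference, the theorem itself, in the notation of the bullets above:

    P(A|B) = P(B|A) P(A) / P(B)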

  38. Exercises • Suppose an automobile insurance company classifies a driver as good, average, or bad. • Of all their insured drivers, 25% are classified good, 50% are average, and 25% are bad. • Suppose for the coming year a good driver has a 5% chance of having an accident, an average driver has a 15% chance of having an accident, and a bad driver has a 25% chance. • If John had an accident in the past year, what is the probability that John is a good driver?
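A worked application of Bayes’ theorem to this exercise (my own computation; the solution slide that follows is not reproduced here):

    P(good | accident)
      = P(accident | good) P(good) / [ P(accident | good) P(good)
                                       + P(accident | average) P(average)
                                       + P(accident | bad) P(bad) ]
      = (0.05)(0.25) / [ (0.05)(0.25) + (0.15)(0.50) + (0.25)(0.25) ]
      = 0.0125 / 0.15
      ≈ 0.083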

  39. Exercises

  40. Naïve Bayesian Classifier: Training Dataset Class: C1: buys_computer = ‘yes’ C2: buys_computer = ‘no’ Data sample X = (age <= 30, Income = medium, Student = yes, Credit_rating = Fair)

  41. Naïve Bayesian Classifier: An Example P(Ci): P(buys_computer = “yes”) = 9/14 = 0.643 P(buys_computer = “no”) = 5/14 = 0.357 Compute P(X|Ci) for each class: P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222 P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6 P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444 P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4 P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667 P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2 P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667 P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4

  42. Naïve Bayesian Classifier: An Example X = (age <= 30 , income = medium, student = yes, credit_rating = fair) P(X|Ci) : P(X|buys_computer = “yes”) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044 P(X|buys_computer = “no”) = 0.6 x 0.4 x 0.2 x 0.4 = 0.019 P(X|Ci)*P(Ci) : P(X|buys_computer = “yes”) * P(buys_computer = “yes”) = 0.028 P(X|buys_computer = “no”) * P(buys_computer = “no”) = 0.007 Therefore, X belongs to class (“buys_computer = yes”)
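A minimal Python sketch that reproduces the computation above; the priors and conditional probabilities are transcribed from the two preceding slides, and the feature labels are my own shorthand:

    # Priors and per-class conditional probabilities copied from the slides.
    prior = {"yes": 9 / 14, "no": 5 / 14}
    likelihood = {
        "yes": {"age<=30": 2 / 9, "income=medium": 4 / 9, "student=yes": 6 / 9, "credit=fair": 6 / 9},
        "no":  {"age<=30": 3 / 5, "income=medium": 2 / 5, "student=yes": 1 / 5, "credit=fair": 2 / 5},
    }

    def classify(features):
        # Naive Bayes: score each class by P(X|Ci) * P(Ci) under the independence assumption.
        scores = {}
        for cls in prior:
            p = prior[cls]
            for f in features:
                p *= likelihood[cls][f]
            scores[cls] = p
        return max(scores, key=scores.get), scores

    X = ["age<=30", "income=medium", "student=yes", "credit=fair"]
    print(classify(X))  # "yes" wins: roughly 0.028 vs 0.007, as on the slide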

  43. Naïve Bayesian Classifier: An Example Test on the following example: X = (age > 30, Income = Low, Student = yes, Credit_rating = Excellent)

  44. So how is “Tomato” pronounced? • A probabilistic finite state acceptor for the pronunciation of “tomato”, adapted from Jurafsky and Martin (2000).

  45. Outline • Expert System introduction • Rule-Based Expert System • Goal Driven Approach • Data Driven Approach • Model-Based Expert System

  46. The Design of Rule-Based Expert Systems • The architecture of a typical expert system for a particular problem domain.

  47. Strategies for state space search • In data-driven search, also called forward chaining, the problem solver begins with the given facts of the problem and a set of legal moves for changing state • This process continues until (we hope!!) it generates a path that satisfies the goal condition

  48. Strategies for state space search • An alternative approach (goal-driven search) is to start with the goal that we want to solve • See what rules can generate this goal and determine what conditions must be true to use them • These conditions become the new goals • The search works backward through successive subgoals until (we hope again!) it works back to the facts given in the problem

  49. An unreal Expert System Example
  Rule 1: if the engine is getting gas, and the engine will turn over, then the problem is spark plugs.
  Rule 2: if the engine does not turn over, and the lights do not come on, then the problem is battery or cables.
  Rule 3: if the engine does not turn over, and the lights do come on, then the problem is the starter motor.
  Rule 4: if there is gas in the fuel tank, and there is gas in the carburetor, then the engine is getting gas.
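A minimal data-driven (forward-chaining) sketch of these four rules in Python; the fact strings are my own encodings of the conditions above:

    # Each rule: (set of premises, conclusion), transcribed from Rules 1-4 above.
    RULES = [
        ({"engine gets gas", "engine turns over"}, "problem is spark plugs"),
        ({"engine does not turn over", "lights do not come on"}, "problem is battery or cables"),
        ({"engine does not turn over", "lights do come on"}, "problem is starter motor"),
        ({"gas in fuel tank", "gas in carburetor"}, "engine gets gas"),
    ]

    def forward_chain(facts):
        # Data-driven search: keep firing rules whose premises hold until nothing new is added.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"gas in fuel tank", "gas in carburetor", "engine turns over"}))
    # Rule 4 fires first ("engine gets gas"), then Rule 1 concludes "problem is spark plugs"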
