
Forward and Backward Chaining

Forward and Backward Chaining. Rule-Based Systems.



  1. Forward and Backward Chaining

  2. Rule-Based Systems Instead of representing knowledge in a relatively declarative, static way (as a collection of things that are true), rule-based systems represent knowledge as a set of rules that tell you what you should do, or what you could conclude, in different situations. A rule-based system consists of a set of IF-THEN rules, a set of facts, and an interpreter that controls the application of the rules, given the facts.
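As a concrete sketch (not from the slides), the three components can be represented with plain Python data: each rule is an IF-THEN pair, the facts are a set of strings, and a short loop plays the role of the interpreter. Real systems add pattern matching with variables, which is omitted here.

```python
# A minimal sketch: rules are (conditions, conclusion) pairs, facts are a
# set of strings, and the interpreter checks which rules apply.
rules = [
    ({"Fritz croaks", "Fritz eats flies"}, "Fritz is a frog"),
]
facts = {"Fritz croaks", "Fritz eats flies"}

# Interpreter: apply every rule whose conditions all hold in the facts.
for conditions, conclusion in rules:
    if conditions <= facts:      # set-inclusion test of the IF part
        facts.add(conclusion)    # the THEN part becomes a new fact

print(facts)
```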

  3. Two broad kinds of rule system: • forward chaining systems and • backward chaining systems. Forward chaining is a data-driven method of deriving a particular goal from a given knowledge base and set of inference rules. Inference rules are applied by matching facts to the antecedents of consequence relations in the knowledge base.

  4. Cont’d… In a forward chaining system you start with the initial facts and keep using the rules to draw new conclusions (or take certain actions) given those facts. Forward chaining systems are primarily data-driven: inference rules are successively applied to elements of the knowledge base until the goal is reached.

  5. Cont’d… A search control method is needed to select which element(s) of the knowledge base to apply the inference rules to at any point in the deduction. Facts in the system are represented in a working memory, which is continually updated.

  6. Cont’d… Rules in the system represent possible actions to take when specified conditions hold on items in the working memory; they are sometimes called condition-action rules. The conditions are usually patterns that must match items in the working memory.

  7. Forward Chaining Systems Actions usually involve adding or deleting items from the working memory. The interpreter controls the application of the rules, given the working memory, thus controlling the system's activity. It is based on a cycle of activity sometimes known as a recognize-act cycle. The system first checks to find all the rules whose conditions hold, selects one, and performs the actions in the action part of that rule. The selection of a rule to fire is based on fixed strategies, known as conflict resolution strategies.

  8. Forward Chaining Systems The actions will result in a new working memory, and the cycle begins again. This cycle is repeated until either no rules fire or some specified goal state is satisfied. Rule-based systems vary greatly in their details and syntax, so the following examples are only illustrative.
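The recognize-act cycle described above can be sketched in a few lines, under the simplifying assumption that rules are ground (variable-free) condition–conclusion pairs and that the conflict resolution strategy is simply "fire the first matching rule":

```python
# Hedged sketch of the recognize-act cycle: recognize the rules whose
# conditions hold, resolve the conflict set with a fixed strategy, act by
# updating working memory, and repeat until no rule fires.
def forward_chain(rules, working_memory):
    working_memory = set(working_memory)
    while True:
        # Recognize: rules whose conditions hold and would add a new fact.
        conflict_set = [(conds, concl) for conds, concl in rules
                        if conds <= working_memory
                        and concl not in working_memory]
        if not conflict_set:
            return working_memory        # no rules fire: stop
        # Conflict resolution: here, simply fire the first matching rule.
        conds, concl = conflict_set[0]
        working_memory.add(concl)        # act: produce a new working memory

# Example: two ground rules chained from the single fact "a".
print(forward_chain([({"a"}, "b"), ({"b"}, "c")], {"a"}))
```

A real interpreter would use richer conflict resolution strategies (recency, specificity, rule priority) at the step marked above.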

  9. Forward Chaining Example
  Knowledge Base:
  • If [X croaks and eats flies] Then [X is a frog]
  • If [X chirps and sings] Then [X is a canary]
  • If [X is a frog] Then [X is colored green]
  • If [X is a canary] Then [X is colored yellow]
  • [Fritz croaks and eats flies]
  Goal: [Fritz is colored Y]?

  10–20. Forward Chaining Example (trace) The original slides step through the derivation one animation frame at a time against the same knowledge base (CPSC 433 Artificial Intelligence); the steps are:
  • Step 1: [Fritz croaks and eats flies] matches the antecedent of If [X croaks and eats flies] Then [X is a frog], so the rule fires and [Fritz is a frog] is added to the knowledge base.
  • Step 2: [Fritz is a frog] matches the antecedent of If [X is a frog] Then [X is colored green], so that rule fires and [Fritz is colored green] is added.
  • Step 3: the goal [Fritz is colored Y] now matches the derived fact [Fritz is colored green], so the search succeeds with Y = green.
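The whole Fritz trace can be reproduced with a short loop, sketched here under the assumption that the rules have already been instantiated for Fritz (i.e. no variable matching):

```python
# Ground rules for Fritz as (conditions, conclusion) pairs.
rules = [
    ({"Fritz croaks and eats flies"}, "Fritz is a frog"),
    ({"Fritz chirps and sings"}, "Fritz is a canary"),
    ({"Fritz is a frog"}, "Fritz is colored green"),
    ({"Fritz is a canary"}, "Fritz is colored yellow"),
]
facts = {"Fritz croaks and eats flies"}

changed = True
while changed:                        # repeat until no rule fires
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)     # fire the rule: add its conclusion
            changed = True

print("Fritz is colored green" in facts)   # True: Y = green
```

Note that the canary rules never fire, since [Fritz chirps and sings] is never in the fact set.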

  21. Backward Chaining Backward chaining is a goal-driven method of deriving a particular goal from a given knowledge base and set of inference rules. Inference rules are applied by matching the goal of the search to the consequents of the relations stored in the knowledge base.

  22. Cont’d… When such a relation is found, the antecedent of the relation is added to the list of goals (and not to the knowledge base, as is done in forward chaining). Search proceeds in this manner until a goal can be matched against a fact in the knowledge base.

  23. Cont’d… As with forward chaining, a search control method is needed to select which goals will be matched against which consequence relations from the knowledge base. Backward chaining systems are goal-driven.

  24. Backward Chaining Systems In a backward chaining system you start with some hypothesis (or goal) you are trying to prove and keep looking for rules that would allow you to conclude that hypothesis, perhaps setting new sub-goals to prove as you go.

  25. Cont’d So far we have looked at how rule-based systems can be used to draw new conclusions from existing data, adding these conclusions to a working memory. This approach is most useful when you know all the initial facts but don't have much idea what the conclusion might be. If you DO know what the conclusion might be, or have some specific hypothesis to test, forward chaining systems may be inefficient.

  26. Backward Chaining Systems Note that a backward chaining system does NOT need to update a working memory. Instead it needs to keep track of what goals it must prove to establish its main hypothesis. Let's take an example.
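A minimal sketch of this goal-directed search, again over ground (variable-free) rules for Fritz: to prove a goal, either match it against a known fact, or match it against a rule's consequent and prove that rule's antecedents as sub-goals. The sketch assumes an acyclic rule set, otherwise the recursion could loop.

```python
# Ground rules and initial facts for Fritz.
rules = [
    ({"Fritz croaks and eats flies"}, "Fritz is a frog"),
    ({"Fritz is a frog"}, "Fritz is colored green"),
]
facts = {"Fritz croaks and eats flies"}

def prove(goal):
    if goal in facts:                        # goal matches a known fact
        return True
    # Goal matches a consequent: prove the antecedents as sub-goals.
    return any(concl == goal and all(prove(c) for c in conds)
               for conds, concl in rules)

print(prove("Fritz is colored green"))   # True
```

Notice that nothing is added to the fact set: the chainer only maintains a (here implicit, via the call stack) list of goals still to prove.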

  27. Forward chaining vs. backward chaining
  Data-driven (forward chaining):
  • Starts with the initial given data and searches for the goal.
  • At each iteration, a new conclusion (RHS) becomes the pattern to look for next.
  • Working memory contains true sentences (RHS's).
  • Stops when the goal is reached.
  Goal-driven (backward chaining) is the reverse:
  • Starts with the goal and tries to search back to the initial given data.
  • At each iteration, new premises (LHS) become the new sub-goals, the patterns to look for next.
  • Working memory contains sub-goals (LHS's) to be satisfied.
  • Stops when all the premises (sub-goals) of fired productions are reached.
  • The sense of the arrow is in effect reversed.
  Both repeatedly pick the next rule to fire.

  28. Cont’d…
  condition → action
  premise → conclusion

  29. Combining Forward and Backward Chaining Begin with the data and search forward until the number of states becomes unmanageably large, then switch to goal-directed search, using sub-goals to guide state selection.
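The switch described above can be sketched as a two-phase procedure. This is a hypothetical illustration, with the `limit` parameter standing in for "unmanageably large":

```python
# Forward phase: derive facts until nothing fires or the fact set grows
# past `limit`; backward phase: prove the goal from whatever was derived.
# Assumes ground, acyclic (conditions, conclusion) rules.
def combined(rules, facts, goal, limit=100):
    facts = set(facts)
    changed = True
    while changed and len(facts) <= limit:       # bounded forward phase
        changed = False
        for conds, concl in rules:
            if conds <= facts and concl not in facts:
                facts.add(concl)
                changed = True

    def prove(g):                                # goal-directed phase
        if g in facts:
            return True
        return any(concl == g and all(prove(c) for c in conds)
                   for conds, concl in rules)

    return prove(goal)
```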

  30. Strategies These strategies may help in getting reasonable behavior from a forward chaining system, but the most important thing is how we write the rules. They should be carefully constructed, with the preconditions specifying as precisely as possible when different rules should fire. Otherwise we will have little idea or control of what will happen

  31. Conclusion
  Data-driven, forward chaining:
  • conditions first, then actions
  • working memory contains true statements describing the current environment
  Goal-driven, backward chaining:
  • actions first, then conditions (sub-goals)
  • working memory contains sub-goals to be shown as true

  32. Forward vs. Backward Reasoning Whether you use forward or backward reasoning to solve a problem depends on the properties of your rule set and initial facts. Sometimes, if you have some particular goal (to test some hypothesis), then backward chaining will be much more efficient, as you avoid drawing conclusions from irrelevant facts.

  33. However, sometimes backward chaining can be very wasteful: there may be many possible ways of trying to prove something, and you may have to try almost all of them before you find one that works. Forward chaining may be better if you have lots of things you want to prove, when you have a small set of initial facts, and when there tend to be lots of different rules which allow you to draw the same conclusion. Backward chaining may be better if you are trying to prove a single fact, given a large set of initial facts, where, if you used forward chaining, lots of rules would be eligible to fire in any cycle.
