
Presentation Transcript


  1. CSCE 580 Artificial Intelligence Ch.5 [P]: Propositions and Inference Sections 5.5-5.7: Complete Knowledge Assumption, Abduction, and Causal Models. Fall 2009 Marco Valtorta mgv@cse.sc.edu

  2. Acknowledgment • The slides are based on [AIMA] and other sources, including other fine textbooks • David Poole, Alan Mackworth, and Randy Goebel. Computational Intelligence: A Logical Approach. Oxford, 1998 • A second edition (by Poole and Mackworth) is under development. Dr. Poole allowed us to use a draft of it in this course • Ivan Bratko. Prolog Programming for Artificial Intelligence, Third Edition. Addison-Wesley, 2001 • The fourth edition is under development • George F. Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Sixth Edition. Addison-Wesley, 2009

  3. Example of Clark’s Completion
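The slide's worked example is not reproduced in this transcript, so as a minimal illustration (my own example, in the book's clause notation): given the clauses

  p <- q.
  p <- r.
  q.

and no clause with head r, Clark's completion replaces the clauses for each atom by a biconditional: p ↔ (q ∨ r), q ↔ true, and r ↔ false. From the completed knowledge base the agent can conclude ~r by negation as failure, while p still follows because q is true.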

  4. Negation as Failure
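A minimal sketch of negation as failure in Prolog (Bratko-style; the predicates and constants are my own illustration, not the slide's):

  flies(X) :- bird(X), \+ penguin(X).   % \+ G succeeds iff the proof of G fails
  bird(tweety).
  bird(pingu).
  penguin(pingu).

The query ?- flies(tweety). succeeds because the attempt to prove penguin(tweety) fails finitely, whereas ?- flies(pingu). fails because penguin(pingu) is provable.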

  5. Non-monotonic Reasoning

  6. Example of Non-monotonic Reasoning
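Continuing the illustrative Prolog program above (again my own example, not necessarily the slide's): with only bird(tweety) known, flies(tweety) is a consequence; adding the single fact

  penguin(tweety).

makes flies(tweety) underivable. Adding knowledge has removed a conclusion, which is exactly what cannot happen under monotonic (purely deductive) reasoning.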

  7. Bottom-up Negation as Failure Inference Procedure
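The procedure itself is not transcribed; as a hedged sketch of the idea on a tiny knowledge base (my own example, in the book's notation):

  p <- ~q.
  q <- r.

Since r appears in no clause head, r can never be derived, so the bottom-up procedure records that r fails; q then fails as well, because its only body mentions r; ~q can now be added to the set of derived literals, and finally p is derived from the first clause.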

  8. Top-down Negation as Failure Inference Procedure

  9. Top-down Negation as Failure Inference Procedure (updated on 2009-10-29)
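Neither version of the procedure is reproduced here. Top-down, the same tiny example (my own, written as standard Prolog, which executes this SLDNF-style strategy) runs as follows:

  p :- \+ q.
  q :- r.

  ?- p.     % selects \+ q and starts a subproof of q
            % q calls r, which has no clauses, so q fails finitely
            % \+ q therefore succeeds, and so does p

This is only a sketch of the control flow; the slides' procedure (and its 2009-10-29 update) may differ in detail.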

  10. Abduction Abduction is a form of reasoning where assumptions are made to explain observations. • For example, if an agent observes that some light is not working, it can hypothesize what is happening in the world to explain why the light is not working. • An intelligent tutoring system could try to explain why a student is giving some answer in terms of what the student understands and does not understand. • The term abduction was coined by Peirce (1839–1914) to differentiate this type of reasoning from deduction, which is determining what logically follows from a set of axioms, and induction, which is inferring general relationships from examples.

  11. Abduction with Horn Clauses and Assumables

  12. Abduction Example
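The example on this slide is not transcribed. As a hedged sketch of abduction with Horn clauses and assumables, here is a small Prolog meta-interpreter (my own code, not the book's implementation; rule/2, assumable/1, and the wet-grass atoms are illustrative):

  :- use_module(library(lists)).   % for member/2 (SWI-Prolog)

  % Knowledge base: rule(Head, Body), where Body is an atom or a conjunction (A, B).
  rule(wet_grass, rained).
  rule(wet_grass, sprinkler_on).
  rule(shoes_wet, wet_grass).
  assumable(rained).
  assumable(sprinkler_on).

  % explain(Goal, Assumptions): Assumptions is a set of assumables that implies Goal.
  explain(Goal, Assumptions) :- abduce(Goal, [], Assumptions).

  abduce((A, B), As0, As) :- !, abduce(A, As0, As1), abduce(B, As1, As).
  abduce(G, As, As)       :- member(G, As).                        % already assumed
  abduce(G, As, [G|As])   :- assumable(G), \+ member(G, As).       % make a new assumption
  abduce(G, As0, As)      :- rule(G, Body), abduce(Body, As0, As). % use a clause

The query ?- explain(shoes_wet, As). returns the two explanations [rained] and [sprinkler_on] on backtracking. (This sketch does not check explanations for consistency against conflicts.)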

  13. Another Abduction Example: a Causal Model (figure: a causal network)

  14. Consistency-based vs. Abductive Diagnosis Determining what is going on inside a system based on observations of its behavior is the problem of diagnosis or recognition. • In abductive diagnosis, the agent hypothesizes diseases and malfunctions, as well as that some parts are working normally, in order to explain the observed symptoms. • This differs from consistency-based diagnosis (page 187) in that the designer models faulty behavior as well as normal behavior, and the observations are explained rather than added to the knowledge base. • Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, as the knowledge base has to be able to actually prove the observations. • It also allows an agent to diagnose systems where there is no normal behavior. For example, in an intelligent tutoring system, by observing what a student does, the tutoring system can hypothesize what the student understands and does not understand, which can then guide the actions of the tutoring system.

  15. Example of Abductive Diagnosis In abductive diagnosis, we need to axiomatize what follows from faults as well as from normality assumptions. For each atom that could be observed, we axiomatize how it could be produced. This could be seen in design terms as a way to make sure the light is on: put both switches up or both switches down, and ensure the switches all work. It could also be seen as a way to determine what is going on if the agent observes that l1 is lit: one of these two scenarios must hold.
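The axiomatization itself is not in the transcript. A deliberately simplified fragment in the same Prolog style as the abduction sketch above (my own simplification, not the book's exact axioms) captures the two scenarios the slide describes:

  % l1 is lit if both switches are up, or both are down,
  % assuming in either case that the switches and the light work.
  rule(lit_l1, (up_s1, up_s2, ok_s1, ok_s2, ok_l1)).
  rule(lit_l1, (down_s1, down_s2, ok_s1, ok_s2, ok_l1)).
  assumable(up_s1).   assumable(up_s2).
  assumable(down_s1). assumable(down_s2).
  assumable(ok_s1).   assumable(ok_s2).   assumable(ok_l1).

The query ?- explain(lit_l1, As). then returns, on backtracking, the "both switches up" assumption set and the "both switches down" assumption set.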

  16. Inference Procedures for Abduction The bottom-up and top-down implementations for assumption-based reasoning with Horn clauses (page 190) can both be used for abduction. • The bottom-up implementation of Figure 5.9 (page 190) computes, in C, the minimal explanations for each atom. Instead of returning {A: <false, A> in C}, return the set of assumptions for each atom. The pruning of supersets of assumptions discussed in the text can also be used. • The top-down implementation can be used to find the explanations of any g by generating the conflicts and, using the same code and knowledge base, proving g instead of false. The minimal explanations of g are the minimal sets of assumables collected to prove g such that no subset of them is a conflict.

  17. Inference Procedures for Abduction, ctd. (figures: the bottom-up and the top-down procedure)
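Neither figure is reproduced here. As a hedged top-down-style sketch, minimal explanations can be obtained from the explain/2 predicate above by filtering out any explanation that strictly contains another one (subset/2 is from SWI-Prolog's library(lists); this filter is my own addition, not the book's code, and it omits the check against conflicts):

  minimal_explanation(G, As) :-
      explain(G, As),
      \+ ( explain(G, Bs),          % no other explanation Bs ...
           subset(Bs, As),          % ... whose elements all occur in As
           \+ subset(As, Bs) ).     % ... and which is strictly smaller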

  18. Causal Models There are many decisions the designer of an agent needs to make when designing a knowledge base for a domain. For example, consider two propositions a and b, both of which are true. There are many choices of how to write this. • A designer could have both a and b as atomic clauses, treating both as primitive. • A designer could have a as primitive and b as derived, stating a as an atomic clause and giving the rule b<-a. • Alternatively, the designer could specify the atomic clause b and the rule a<-b, treating b as primitive and a as derived. • These representations are logically equivalent; they cannot be distinguished logically. However, they have different effects when the knowledge base is changed. Suppose a were no longer true for some reason. In the first and third representations, b would still be true, and in the second representation b would no longer be true. • A causal model is a representation of a domain that predicts the results of interventions. An intervention is an action that forces a variable to have a particular value.
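A tiny rendering of the second representation in Prolog (my own restatement of the slide's point):

  % Representation 2: a is primitive, b is derived from a.
  a.
  b :- a.
  % An intervention that makes a false (e.g. removing the fact a)
  % also makes b underivable; under representations 1 and 3,
  % b would remain true after the same intervention.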

  19. Causal vs. Evidential Models In order to predict the effect of interventions, a causal model represents how the cause implies its effect. When the cause is changed, its effect should be changed. An evidential model represents a domain in the other direction, from effect to cause.
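As an illustration (my own example, not necessarily the one on the following slides): a causal model of a garden might contain wet_grass <- sprinkler_on, while an evidential model of the same domain would contain sprinkler_on <- wet_grass. If the agent intervenes to wet the grass with a hose, it should not come to believe that the sprinkler was on; only the causal rule predicts this correctly, since the evidential rule would propagate the intervention back to its supposed cause.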

  20. Another Causal Model Example

  21. Parts of a Causal Model

  22. Using a Causal Model
