CSCE 580 Artificial Intelligence
Ch.5 [P]: Propositions and Inference
Sections 5.5-5.7: Complete Knowledge Assumption, Abduction, and Causal Models

Fall 2009

Marco Valtorta

mgv@cse.sc.edu

Acknowledgment
  • The slides are based on [AIMA] and other sources, including other fine textbooks
  • David Poole, Alan Mackworth, and Randy Goebel. Computational Intelligence: A Logical Approach. Oxford, 1998
    • A second edition (by Poole and Mackworth) is under development. Dr. Poole allowed us to use a draft of it in this course
  • Ivan Bratko. Prolog Programming for Artificial Intelligence, Third Edition. Addison-Wesley, 2001
    • The fourth edition is under development
  • George F. Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Sixth Edition. Addison-Wesley, 2009
Abduction

Abduction is a form of reasoning where assumptions are made to explain observations.

  • For example, if an agent observes that some light is not working, it can hypothesize what is happening in the world to explain why the light is not working.
  • An intelligent tutoring system could try to explain why a student is giving some answer in terms of what the student understands and does not understand.
  • The term abduction was coined by Peirce (1839–1914) to differentiate this type of reasoning from deduction, which is determining what logically follows from a set of axioms, and induction, which is inferring general relationships from examples.
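The contrast with deduction can be sketched in a few lines of Python. This is a toy illustration, not from the slides; the atoms wet_grass, rained, and sprinkler_on are made up. Deduction derives what follows from given facts; abduction asks which assumables would explain an observation:

```python
# Toy Horn knowledge base: head -> list of alternative bodies.
# All atom names here are illustrative assumptions, not from the course KB.
rules = {"wet_grass": [{"rained"}, {"sprinkler_on"}]}
assumables = {"rained", "sprinkler_on"}

def deduce(facts):
    """Deduction: forward-chain to everything that logically follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            if head not in derived and any(b <= derived for b in bodies):
                derived.add(head)
                changed = True
    return derived

def abduce(observation):
    """Abduction: which sets of assumables would explain the observation?
    (In this toy example each rule body consists only of assumables.)"""
    if observation in assumables:
        return [{observation}]
    return [set(body) for body in rules.get(observation, [])
            if body <= assumables]

print(deduce({"rained"}))    # the set {rained, wet_grass}
print(abduce("wet_grass"))   # two explanations: {rained} and {sprinkler_on}
```

Note the direction of inference: deduction runs from `rained` forward to `wet_grass`; abduction starts from the observed `wet_grass` and works back to candidate causes.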
Consistency-based vs. Abductive Diagnosis

Determining what is going on inside a system based on observations of its behavior is the problem of diagnosis or recognition.

  • In abductive diagnosis, the agent hypothesizes diseases and malfunctions, as well as that some parts are working normally, in order to explain the observed symptoms.
  • This differs from consistency-based diagnosis (page 187) in that the designer models faulty behavior as well as normal behavior, and the observations are explained rather than added to the knowledge base.
  • Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, as the knowledge base has to be able to actually prove the observations.
  • It also allows an agent to diagnose systems where there is no notion of normal behavior. For example, in an intelligent tutoring system, by observing what a student does, the system can hypothesize what the student understands and does not understand, which can then guide the actions of the tutoring system.
Example of Abductive Diagnosis

In abductive diagnosis, we need to axiomatize what follows from faults as well as from normality assumptions. For each atom that could be observed, we axiomatize how it could be produced.

This could be seen in design terms as a way to make sure the light is on: put both switches up or both switches down, and ensure that the switches all work. It could also be seen as a way to determine what is going on if the agent observes that l1 is lit: one of these two scenarios must hold.
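The two scenarios can be sketched as data. The atom names below (up_s1, down_s1, ok_l1, etc.) are hypothetical stand-ins for the textbook's electrical-domain axiomatization, not its exact knowledge base:

```python
# Hypothetical axiomatization of "l1 is lit": both switches up, or both
# down, with the switches and the light working. Names are assumptions.
assumables = {"up_s1", "up_s2", "down_s1", "down_s2",
              "ok_s1", "ok_s2", "ok_l1"}

# Each body is one way the observable atom lit_l1 could be produced.
rules = {
    "lit_l1": [
        {"up_s1", "up_s2", "ok_s1", "ok_s2", "ok_l1"},
        {"down_s1", "down_s2", "ok_s1", "ok_s2", "ok_l1"},
    ]
}

def explanations(atom):
    """Candidate explanations of an observed atom: the sets of
    assumables under which it can be proved."""
    return [body for body in rules.get(atom, []) if body <= assumables]

# Observing lit_l1 yields exactly the two scenarios described above.
for expl in explanations("lit_l1"):
    print(sorted(expl))
```

Read as design, each explanation is a recipe for making the light come on; read as diagnosis, observing lit_l1 tells the agent that one of these two assumption sets holds.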

Inference Procedures for Abduction

The bottom-up and top-down implementations for assumption-based reasoning with Horn clauses (page 190) can both be used for abduction.

  • The bottom-up implementation of Figure 5.9 (page 190) computes, in C, the minimal explanations for each atom. Instead of returning {A : ⟨false, A⟩ ∈ C}, return the set of assumptions for each atom. The pruning of supersets of assumptions discussed in the text can also be used.
  • The top-down implementation can be used to find the explanations of any g by generating the conflicts and, using the same code and knowledge base, proving g instead of false. The minimal explanations of g are the minimal sets of assumables collected in proving g that do not contain any conflict as a subset (i.e., that are consistent).
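The bottom-up idea can be sketched under simplifying assumptions (definite clauses only, no conflicts, illustrative atom names; this is an illustration, not the book's Figure 5.9): repeatedly combine explanations of the atoms in a rule body into explanations of its head, pruning non-minimal sets.

```python
from itertools import product

def minimal(sets):
    """Keep only the subset-minimal sets (the superset pruning
    mentioned in the text)."""
    return [s for s in sets if not any(t < s for t in sets)]

def bottom_up(rules, assumables):
    """rules: list of (head, body) pairs, body a set of atoms.
    Returns, for each derivable atom, its minimal explanations:
    the minimal sets of assumables sufficient to prove it."""
    expl = {a: [frozenset([a])] for a in assumables}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if all(b in expl for b in body):
                # Combine one explanation per body atom into one for the head.
                for combo in product(*(expl[b] for b in body)):
                    e = frozenset().union(*combo)
                    cur = expl.setdefault(head, [])
                    if e not in cur and not any(t <= e for t in cur):
                        cur.append(e)
                        expl[head] = minimal(cur)
                        changed = True
    return expl

# Illustrative KB: wet has two causes, and slippery follows from wet.
rules = [("wet", {"rained"}), ("wet", {"sprinkler"}),
         ("slippery", {"wet"})]
expl = bottom_up(rules, {"rained", "sprinkler"})
# slippery ends up with two minimal explanations: {rained} and {sprinkler}
```

With conflicts present, one would additionally discard any collected set that contains a conflict as a subset, as described above.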
Causal Models

There are many decisions the designer of an agent needs to make when designing a knowledge base for a domain. For example, consider two propositions a and b, both of which are true. There are many choices of how to write this.

  • A designer could specify both a and b as atomic clauses, treating both as primitive.
  • A designer could have a as primitive and b as derived, stating a as an atomic clause and giving the rule b <- a.
  • Alternatively, the designer could specify the atomic clause b and the rule a <- b, treating b as primitive and a as derived.
  • These representations are logically equivalent; they cannot be distinguished logically. However, they have different effects when the knowledge base is changed. Suppose a were no longer true for some reason. In the first and third representations, b would still be true; in the second representation, b would no longer be true.
  • A causal model is a representation of a domain that predicts the results of interventions. An intervention is an action that forces a variable to have a particular value.
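The effect of removing a under each of the three representations can be checked mechanically. This sketch (illustrative, not from the slides) encodes each representation as atomic clauses plus rules and derives what still follows once the atomic clause a is dropped:

```python
def consequences(facts, rules):
    """Forward-chain from the atomic clauses using the given rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

rep1 = ({"a", "b"}, [])             # both a and b primitive
rep2 = ({"a"}, [("b", {"a"})])      # a primitive, b derived: b <- a
rep3 = ({"b"}, [("a", {"b"})])      # b primitive, a derived: a <- b

# All three entail both a and b; they differ once a is removed.
for name, (facts, rules) in [("rep1", rep1), ("rep2", rep2), ("rep3", rep3)]:
    without_a = consequences(facts - {"a"}, rules)
    print(name, "b still true?", "b" in without_a)
```

Only the second representation loses b when a is removed, matching the point above: logically equivalent knowledge bases can respond differently to change, which is exactly what a causal model must get right for interventions.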
Causal vs. Evidential Models

In order to predict the effect of interventions, a causal model represents how a cause brings about its effect: when the cause is changed, its effect should change. An evidential model represents the domain in the other direction, from effect to cause.