Inconsistency Tolerance in SNePS

Stuart C. Shapiro

Department of Computer Science and Engineering,

and Center for Cognitive Science

University at Buffalo, The State University of New York

201 Bell Hall, Buffalo, NY 14260-2000

shapiro@cse.buffalo.edu

http://www.cse.buffalo.edu/~shapiro/

- João Martins
- Frances L. Johnson
- Bharat Bhushan
- The SNePS Research Group
- NSF, Instituto Nacional de Investigação Científica, Rome Air Development Center, AFOSR, U.S. Army CECOM

S. C. Shapiro

- Introduction
- Some Rules of Inference
- ~I and Belief Revision
- Credibility Ordering and Automatic BR
- Reasoning in Different Contexts
- Default Reasoning by Preferential Ordering
- Summary


SNePS is a logic- and network-based knowledge representation, reasoning, and acting system [Shapiro & Group ’02].

This talk will ignore the network and acting aspects.


- Based on R, the logic of relevant implication [Anderson & Belnap ’75; Martins & Shapiro ’88; Shapiro ’92]


P {… <origin tag, origin set> …}

The origin set is the set of hypotheses from which P has been derived.

Origin tags: hyp = hypothesis, der = derived.

The origin set tracks relevance and ATMS assumptions.
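The annotation can be sketched as a small data structure. This is an illustrative Python sketch, not the actual SNePS implementation (which is in Lisp); the class and method names are hypothetical. Each proposition carries one or more supports, each a pair of an origin tag and a frozen set of hypothesis names.

```python
from dataclasses import dataclass, field

@dataclass
class Prop:
    """Sketch of a SNePS-style proposition with support records."""
    name: str       # e.g. "wff3"
    formula: str    # printable form of the proposition
    supports: set = field(default_factory=set)  # {(origin tag, origin set)}

    def add_support(self, tag, origin_set):
        self.supports.add((tag, frozenset(origin_set)))

# A hypothesis supports itself, with tag 'hyp' and a singleton origin set.
wff3 = Prop("wff3", "free(Willy) and whale(Willy)")
wff3.add_support("hyp", {"wff3"})

# A derived proposition carries tag 'der' and the hypotheses it rests on.
wff1 = Prop("wff1", "whale(Willy)")
wff1.add_support("der", {"wff3"})
```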


Some Rules of Inference

Hyp: P {<hyp,{P}>}

: whale(Willy) and free(Willy).
wff3: free(Willy) and whale(Willy) {<hyp,{wff3}>}


&E: From A and B {<t,s>}

infer A {<der,s>} or B {<der,s>}

wff3: free(Willy) and whale(Willy) {<hyp,{wff3}>}

: free(Willy)?
wff2: free(Willy) {<der,{wff3}>}


The origin set is the union of the origin sets of the parents.

wff3: free(Willy) and whale(Willy) {<hyp,{wff3}>}

wff6:all(x)(andor(0,1){manatee(x), dolphin(x), whale(x)})

{<hyp,{wff6}>}

: dolphin(Willy)?

wff9: ~dolphin(Willy) {<der,{wff3,wff6}>}

andor(0,1) means that at most one of the enclosed propositions holds; hence ~dolphin(Willy) follows from whale(Willy).


The origin set is the union of os's of parents.

Since wff10: all(x)(whale(x) => mammal(x)) {<hyp,{wff10}>}

and wff1: whale(Willy){<der,{wff3}>}

I infer wff11: mammal(Willy) {<der,{wff3,wff10}>}
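The union rule can be sketched in a few lines of illustrative Python (the helper name is hypothetical; the real machinery is SNePS's Lisp implementation): a conclusion drawn by a rule such as =>E gets tag der and the union of its premises' origin sets.

```python
def combine(*origin_sets):
    """Origin set for a rule like =>E or &I: the union of the
    origin sets of the premises."""
    out = frozenset()
    for s in origin_sets:
        out |= frozenset(s)
    return out

# wff10: all(x)(whale(x) => mammal(x)) has origin set {wff10};
# wff1: whale(Willy) has origin set {wff3}.
os_wff11 = combine({"wff10"}, {"wff3"})
assert os_wff11 == frozenset({"wff3", "wff10"})  # support of mammal(Willy)
```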


For =>I, the origin set is the difference of the origin sets of the parents.

wff12: all(x)(orca(x) => whale(x)) {<hyp,{wff12}>}

: orca(Keiko) => mammal(Keiko)?

Let me assume that wff13: orca(Keiko) {<hyp,{wff13}>}

Since wff12: all(x)(orca(x) => whale(x)) {<hyp,{wff12}>}
and wff13: orca(Keiko) {<hyp,{wff13}>}

I infer whale(Keiko) {<der,{wff12,wff13}>}


For =>I, the origin set is the difference of the origin sets of the parents.

Since wff10: all(x)(whale(x) => mammal(x)) {<hyp,{wff10}>}

and wff16: whale(Keiko) {<der,{wff12,wff13}>}
I infer mammal(Keiko) {<der,{wff10,wff12,wff13}>}

Since wff14: mammal(Keiko) {<der,{wff10,wff12,wff13}>} was derived assuming
wff13: orca(Keiko) {<hyp,{wff13}>}
I infer

wff15: orca(Keiko) => mammal(Keiko) {<der,{wff10,wff12}>}
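The difference rule for =>I can be sketched the same way (illustrative Python, hypothetical helper name): the discharged assumption is subtracted from the consequent's origin set.

```python
def discharge(consequent_os, assumption):
    """=>I sketch: the origin set of A => B is the origin set of B
    minus the discharged assumption A."""
    return frozenset(consequent_os) - {assumption}

# mammal(Keiko) was derived with origin set {wff10, wff12, wff13},
# under the assumption wff13: orca(Keiko).
os_wff15 = discharge({"wff10", "wff12", "wff13"}, "wff13")
assert os_wff15 == frozenset({"wff10", "wff12"})  # support of orca(Keiko) => mammal(Keiko)
```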


~I and Belief Revision

- ~I is triggered when a contradiction is derived.
- The proposition to be negated must be one of the hypotheses underlying the contradiction.
- Its origin set is the rest of those hypotheses.
- SNeBR [Martins & Shapiro ’88] engages the user in choosing the culprit.
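This rule can be sketched as a set computation (illustrative Python with a hypothetical function name, not the SNeBR implementation): the candidate culprits are the union of the two origin sets, and the negation's origin set is the remaining hypotheses. The numbers below match the Willy example and the wff25 support shown in the later transcript.

```python
def negation_introduction(os_p, os_not_p, culprit):
    """~I sketch: the culprit must be one of the hypotheses underlying
    the contradiction; the negation's origin set is the remaining ones."""
    union = frozenset(os_p) | frozenset(os_not_p)
    if culprit not in union:
        raise ValueError("culprit must underlie the contradiction")
    return union - {culprit}

# fish(Willy) rests on {wff3, wff19}; ~fish(Willy) on {wff3, wff10, wff20}.
# Negating wff19: all(x)(whale(x) => fish(x)) yields:
os_neg = negation_introduction({"wff3", "wff19"},
                               {"wff3", "wff10", "wff20"},
                               "wff19")
assert os_neg == frozenset({"wff3", "wff10", "wff20"})
```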


wff19: all(x)(whale(x) => fish(x)){<hyp,{wff19}>}

wff20: all(x)(andor(0,1){mammal(x), fish(x)})

{<hyp,{wff20}>}

wff21: all(x)(fish(x) <=> has(x,scales))

{<hyp,{wff21}>}


: has(Willy, scales)?

Since wff19: all(x)(whale(x) => fish(x)) {<hyp,{wff19}>}

and wff1: whale(Willy) {<der,{wff3}>}
I infer fish(Willy) {<der,{wff3,wff19}>}

Since wff21: all(x)(fish(x) <=> has(x,scales))

{<hyp,{wff21}>}

and wff23: fish(Willy) {<der,{wff3,wff19}>}
I infer has(Willy,scales) {<der,{wff3,wff19,wff21}>}

Since wff20:

all(x)(andor(0,1){mammal(x), fish(x)})

{<hyp,{wff20}>}

and wff11: mammal(Willy) {<der,{wff3,wff10}>}
I infer it is not the case that wff23: fish(Willy)


A contradiction was detected within context default-defaultct.
The contradiction involves the newly derived proposition:
wff24: ~fish(Willy) {<der,{wff3,wff10,wff20}>}
and the previously existing proposition:
wff23: fish(Willy) {<der,{wff3,wff19}>}
You have the following options:
1. [c]ontinue anyway, knowing that a contradiction is derivable;
2. [r]e-start the exact same run in a different context which is not inconsistent;
3. [d]rop the run altogether.
(please type c, r or d) =><= r


In order to make the context consistent you must delete at least one hypothesis from the set listed below.

This set of hypotheses is known to be inconsistent:

1 : wff20: all(x)(andor(0,1){mammal(x),fish(x)})

{<hyp,{wff20}>} (1 dependent proposition: (wff24))

2 : wff19: all(x)(whale(x) => fish(x)) {<hyp,{wff19}>} (2 dependent propositions: (wff23 wff22))

3 : wff10: all(x)(whale(x) => mammal(x)){<hyp,{wff10}>} (3 dependent propositions: (wff24 wff15 wff11))

4 : wff3: free(Willy) and whale(Willy) {<hyp,{wff3}>} (8 dependent propositions:

(wff24 wff23 wff22 wff11 wff9 wff5 wff2 wff1))

User deletes #2: wff19.


Since wff21: all(x)(fish(x) <=> has(x,scales))

{<hyp,{wff21}>}

and it is not the case that wff23: fish(Willy)

{<der,{wff3,wff19}>}

I infer it is not the case that

wff22: has(Willy,scales) {<der,{wff3,wff19,wff21}>}

wff26: ~has(Willy,scales){<der,{wff3,wff10,wff20,wff21}>}


: list-asserted-wffs

wff3: free(Willy) and whale(Willy) {<hyp,{wff3}>}

wff6: all(x)(andor(0,1){manatee(x),dolphin(x),whale(x)})

{<hyp,{wff6}>}

wff10: all(x)(whale(x) => mammal(x)) {<hyp,{wff10}>}

wff12: all(x)(orca(x) => whale(x)) {<hyp,{wff12}>}

wff20: all(x)(andor(0,1){mammal(x),fish(x)}) {<hyp,{wff20}>}

wff21: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff21}>}

wff1: whale(Willy) {<der,{wff3}>}

wff2: free(Willy) {<der,{wff3}>}

wff11: mammal(Willy) {<der,{wff3,wff10}>}

wff15: orca(Keiko) => mammal(Keiko) {<der,{wff10,wff12}>}


: list-asserted-wffs

wff3: free(Willy) and whale(Willy) {<hyp,{wff3}>}

wff6: all(x)(andor(0,1){manatee(x),dolphin(x),whale(x)})

{<hyp,{wff6}>}

wff10: all(x)(whale(x) => mammal(x)) {<hyp,{wff10}>}

wff12: all(x)(orca(x) => whale(x)) {<hyp,{wff12}>}

wff20: all(x)(andor(0,1){mammal(x),fish(x)}) {<hyp,{wff20}>}

wff21: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff21}>}

wff9: ~dolphin(Willy) {<der,{wff3,wff10}>}

wff24: ~fish(Willy) {<der,{wff3,wff10,wff20}>}

wff25: ~(all(x)(whale(x) => fish(x))) {<ext,{wff3,wff10,wff20}>}

wff26: ~has(Willy,scales) {<der,{wff3,wff10,wff20,wff21}>}


- The logic is paraconsistent: from P {<t1, {h1 … hi}>} and ~P {<t2, {h(i+1) … hn}>}, ~I yields only ~hj for some underlying hypothesis hj; an arbitrary proposition does not follow.

- When a contradiction is explicitly found, the user is engaged in its resolution.


Credibility Ordering and Automatic BR

- Hypotheses may be given sources.
- Sources may be given relative credibility.
- Hypotheses inherit relative credibility from sources.
- Hypotheses may be given relative credibility directly. (Not shown.)
- SNeBR may use relative credibility to choose a culprit by itself. [Shapiro & Johnson ’00]
*Not yet in released version.
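The automatic choice can be sketched as scoring each candidate hypothesis. This is a guess at the selection criteria based on what the transcript reports ("least believed hypothesis", "hypothesis supporting the fewest wffs"); the function name, credibility numbers, and dependent lists below are all hypothetical illustrations, not SNeBR's actual data.

```python
def choose_culprit(candidates, credibility, dependents):
    """Sketch of automatic culprit choice: retract the least credible
    hypothesis; break ties by the fewest dependent propositions."""
    return min(candidates,
               key=lambda h: (credibility[h], len(dependents.get(h, ()))))

# Illustrative numbers for the Willy example: wff4 (from Melville)
# inherits lower credibility than the Darwin-sourced wff6.
credibility = {"wff1": 2, "wff4": 1, "wff6": 2, "wff11": 2}
dependents = {"wff1": ["wff17"], "wff4": ["wff13", "wff14"],
              "wff6": ["wff15"], "wff11": ["wff9", "wff12"]}
culprit = choose_culprit(["wff1", "wff4", "wff6", "wff11"],
                         credibility, dependents)
assert culprit == "wff4"
```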


wff1: all(x)(andor(0,1){mammal(x),fish(x)}) {<hyp,{wff1}>}

wff2: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff2}>}

wff3: all(x)(orca(x) => whale(x)) {<hyp,{wff3}>}

: Source(Melville, all(x)(whale(x) => fish(x)).).

wff5: Source(Melville,all(x)(whale(x) => fish(x)))

{<hyp,{wff5}>}

: Source(Darwin, all(x)(whale(x) => mammal(x)).).

wff7: Source(Darwin,all(x)(whale(x) => mammal(x)))

{<hyp,{wff7}>}

: Sgreater(Darwin, Melville).
wff8: Sgreater(Darwin,Melville) {<hyp,{wff8}>}

wff11: free(Willy) and whale(Willy) {<hyp,{wff11}>}

Note: Source & Sgreater props are regular object-language props.


: has(Willy, scales)?

Since wff4: all(x)(whale(x) => fish(x)) {<hyp,{wff4}>}
and wff9: whale(Willy) {<der,{wff11}>}
I infer fish(Willy) {<der,{wff4,wff11}>}

Since wff2: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff2}>}
and wff14: fish(Willy) {<der,{wff4,wff11}>}
I infer has(Willy,scales)

Since wff6: all(x)(whale(x) => mammal(x)) {<hyp,{wff6}>}
and wff9: whale(Willy) {<der,{wff11}>}
I infer mammal(Willy)

Since wff1: all(x)(andor(0,1){mammal(x),fish(x)}) {<hyp,{wff1}>}
and wff15: mammal(Willy) {<der,{wff6,wff11}>}
I infer it is not the case that wff14: fish(Willy) {<der,{wff4,wff11}>}


A contradiction was detected within context default-defaultct.

The contradiction involves the newly derived proposition: wff17: ~fish(Willy) {<der,{wff1,wff6,wff11}>}

and the previously existing proposition: wff14: fish(Willy) {<der,{wff4,wff11}>}

The least believed hypothesis: (wff4)

The most common hypothesis: (nil)

The hypothesis supporting the fewest wffs: (wff1)

I removed the following belief: wff4: all(x)(whale(x) => fish(x)) {<hyp,{wff4}>}

I no longer believe the following 2 propositions:
wff14: fish(Willy) {<der,{wff4,wff11}>}
wff13: has(Willy,scales) {<der,{wff2,wff4,wff11}>}


- User may select automatic BR.
- Relative credibility is used.
- User is informed of lost beliefs.


Reasoning in Different Contexts

- A context is a set of hypotheses and all propositions derived from them.
- Reasoning is performed within a context.
- A conclusion is available in every context that is a superset of its origin set. [Martins & Shapiro ’83]
- Contradictions across contexts are not noticed.
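The availability condition can be sketched directly from the definition (illustrative Python; the function name and the example contexts are drawn from the transcript that follows, but the code is not the SNePS implementation): a conclusion is asserted in a context iff at least one of its origin sets is a subset of the context's hypothesis set.

```python
def believed_in(supports, context_hyps):
    """A conclusion is available in a context iff some origin set of it
    is a subset of the context's hypothesis set."""
    ctx = frozenset(context_hyps)
    return any(frozenset(os) <= ctx for _tag, os in supports)

# has(Willy, scales) is derived in Melville with origin set {wff2, wff7, wff9}.
supports = [("der", {"wff2", "wff7", "wff9"})]
melville = {"wff1", "wff2", "wff3", "wff7", "wff8", "wff9"}
darwin = {"wff1", "wff2", "wff3", "wff4", "wff7", "wff8"}
assert believed_in(supports, melville)
assert not believed_in(supports, darwin)  # wff9 is not a Darwin hypothesis
```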


: set-context Darwin ()

: set-default-context Darwin

wff1: all(x)(andor(0,1){mammal(x),fish(x)})

{<hyp,{wff1}>}

wff2: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff2}>}

wff3: all(x)(orca(x) => whale(x)) {<hyp,{wff3}>}

wff4: all(x)(whale(x) => mammal(x)) {<hyp,{wff4}>}

wff7: free(Willy) and whale(Willy) {<hyp,{wff7}>}


: describe-context
((assertions (wff8 wff7 wff4 wff3 wff2 wff1)) (restriction nil) (named (science)))

: set-context Melville (wff8 wff7 wff3 wff2 wff1)
((assertions (wff8 wff7 wff3 wff2 wff1)) (restriction nil) (named (melville)))

: set-default-context Melville
((assertions (wff8 wff7 wff3 wff2 wff1)) (restriction nil) (named (melville)))

: all(x)(whale(x) => fish(x)).
wff9: all(x)(whale(x) => fish(x)) {<hyp,{wff9}>}


: has(Willy, scales)?

Since wff9: all(x)(whale(x) => fish(x)) {<hyp,{wff9}>}
and wff5: whale(Willy) {<der,{wff7}>}
I infer fish(Willy) {<der,{wff7,wff9}>}

Since wff2: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff2}>}
and wff11: fish(Willy) {<der,{wff7,wff9}>}
I infer has(Willy,scales) {<der,{wff2,wff7,wff9}>}

wff10: has(Willy,scales) {<der,{wff2,wff7,wff9}>}


: set-default-context Darwin

: has(Willy, scales)?

Since wff4: all(x)(whale(x) => mammal(x)) {<hyp,{wff4}>}
and wff5: whale(Willy) {<der,{wff7}>}
I infer mammal(Willy)

Since wff1: all(x)(andor(0,1){mammal(x),fish(x)}) {<hyp,{wff1}>}
and wff12: mammal(Willy) {<der,{wff4,wff7}>}
I infer it is not the case that wff11: fish(Willy)

Since wff2: all(x)(fish(x) <=> has(x,scales)) {<hyp,{wff2}>}
and it is not the case that wff11: fish(Willy) {<der,{wff7,wff9}>}
I infer it is not the case that wff10: has(Willy,scales)

wff15: ~has(Willy,scales) {<der,{wff1,wff2,wff4,wff7}>}


- Contradictory information may be isolated in different contexts.
- Reasoning is performed in a single context.
- Results are available in other contexts.


Default Reasoning by Preferential Ordering

- No special syntax for default rules.
- If P and ~P are derived, but the argument for one is undercut by an argument for the other, don't believe the undercut conclusion.
- Unlike BR, the hypotheses remain believed; only the conclusion is disbelieved. [Grosof ’97, Bhushan ’03]


- P undercuts ~P if
- Precludes(P, ~P) or
- Every origin set of ~P has some hyp h such that there is some hyp q in an origin set of P such that Precludes(q, h).

- Precludes(P, Q) is a proposition like any other.
*Not yet in released version.
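The second undercut condition can be sketched as a direct set computation (illustrative Python with hypothetical names; the origin sets in the example are illustrative simplifications of the Willy/penguin examples that follow): ~P is undercut when every one of its origin sets contains a hypothesis that some hypothesis underlying P precludes.

```python
def undercuts(os_sets_p, os_sets_not_p, precludes):
    """P undercuts ~P if every origin set of ~P contains some hypothesis h
    such that some hypothesis q in an origin set of P has Precludes(q, h)."""
    p_hyps = set().union(*map(set, os_sets_p)) if os_sets_p else set()
    return all(any((q, h) in precludes for h in os_not for q in p_hyps)
               for os_not in os_sets_not_p)

# Willy: Precludes(wff11, wff8) holds, wff11 supports swims(Willy), and
# wff8 appears in the support of ~swims(Willy).
precludes = {("wff11", "wff8")}
assert undercuts([{"wff1", "wff11", "wff15"}],
                 [{"wff1", "wff7", "wff8", "wff15"}],
                 precludes)
assert not undercuts([{"wff1", "wff11", "wff15"}],
                     [{"wff1", "wff7", "wff15"}],  # no precluded hypothesis
                     precludes)
```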


wff1: all(x)(orca(x) => whale(x))

wff2: all(x)(whale(x) => mammal(x))

wff3: all(x)(deer(x) => mammal(x))

wff4: all(x)(tuna(x) => fish(x))

wff5: all(x)(canary(x) => bird(x))

wff6: all(x)(penguin(x) => bird(x))

wff7: all(x)(andor(0,1){swims(x),flies(x),runs(x)})

wff8: all(x)(mammal(x) => runs(x))

wff9: all(x)(fish(x) => swims(x))

wff10: all(x)(bird(x) => flies(x))

wff11: all(x)(whale(x) => swims(x))

wff12: all(x)(penguin(x) => swims(x))


wff13: Precludes(all(x)(whale(x) => swims(x)),

all(x)(mammal(x) => runs(x)))

wff14: Precludes(all(x)(penguin(x) => swims(x)),

all(x)(bird(x) => flies(x)))

wff15: orca(Willy)

wff16: tuna(Charlie)

wff17: deer(Bambi)

wff18: canary(Tweety)

wff19: penguin(Opus)


: swims(?x)?

I infer swims(Opus)

I infer swims(Charlie)

I infer swims(Willy)

I infer flies(Tweety)

I infer it is not the case that swims(Tweety)

I infer flies(Opus)

I infer it is not the case that wff20: swims(Opus)

I infer runs(Willy)

I infer it is not the case that wff24: swims(Willy)

I infer runs(Bambi)

I infer it is not the case that swims(Bambi)


Since wff13: Precludes(all(x)(whale(x) => swims(x)),

all(x)(mammal(x) => runs(x)))

and wff11: all(x)(whale(x) => swims(x)) {<hyp,{wff11}>}

holds within the BS defined by context default-defaultct

Therefore wff34: ~swims(Willy)

containing in its support

wff8: all(x)(mammal(x) => runs(x))

is precluded by wff24: swims(Willy)

that contains in its support

wff11:all(x)(whale(x) => swims(x))


Since wff14: Precludes(all(x)(penguin(x) => swims(x)),

all(x)(bird(x) => flies(x)))

and wff12: all(x)(penguin(x) => swims(x))

holds within the BS defined by context default-defaultct

Therefore wff31: ~swims(Opus)

containing in its support

wff10:all(x)(bird(x) => flies(x))

is precluded by wff20: swims(Opus)

that contains in its support

wff12: all(x)(penguin(x) => swims(x))


wff38: ~swims(Bambi) {<der,{wff3,wff7,wff8,wff17}>}

wff28: ~swims(Tweety) {<der,{wff5,wff7,wff10,wff18}>}

wff24: swims(Willy) {<der,{wff1,wff11,wff15}>}

wff22: swims(Charlie) {<der,{wff4,wff9,wff16}>}

wff20: swims(Opus) {<der,{wff12,wff19}>}


wff1: all(x)(robin(x) => bird(x))

wff2: all(x)(kiwi(x) => bird(x))

wff3: all(x)(bird(x) => flies(x))

wff4: all(x)(bird(x) => (~flies(x)))

wff5: all(x)(robin(x) => flies(x))

wff6: all(x)(kiwi(x) => (~flies(x)))

Example from Delgrande & Schaub ’00


wff7: Precludes(all(x)(robin(x) => flies(x)),

all(x)(bird(x) => (~flies(x))))

wff8: Precludes(all(x)(kiwi(x) => (~flies(x))),

all(x)(bird(x) => flies(x)))

wff12: (~location(New Zealand))

=> Precludes(all(x)(bird(x) => flies(x)),

all(x)(bird(x) => (~flies(x))))

wff14: location(New Zealand)

=> Precludes(all(x)(bird(x) => (~flies(x))),

all(x)(bird(x) => flies(x)))

wff10: ~location(New Zealand)

wff15: Precludes(location(New Zealand),

~location(New Zealand))


wff16: robin(Robin)

wff17: kiwi(Kenneth)

wff18: bird(Betty)

: flies(?x)?


wff24: ~flies(Kenneth){<der,{wff6,wff17}>,

<der,{wff2,wff4,wff17}>,

<der,{wff2,wff4,wff6,wff17}>}

wff21: flies(Robin) {<der,{wff5,wff16}>,

<der,{wff1,wff3,wff16}>}

wff19: flies(Betty) {<der,{wff3,wff18}>}


: location("New Zealand").

wff9: location(New Zealand)

: flies(?x)?

wff24: ~flies(Kenneth) {<der,{wff6,wff17}>,

<der,{wff2,wff4,wff17}>,

<der,{wff2,wff4,wff6,wff17}>}

wff21: flies(Robin) {<der,{wff5,wff16}>,

<der,{wff1,wff3,wff16}>}

wff20: ~flies(Betty) {<der,{wff4,wff18}>}


- Contradictions may be handled by DR instead of by BR.
- Hypotheses retained; conclusion removed.
- DR uses preferential ordering among contradictory conclusions or among supporting hypotheses.
- Precludes forms an object-language proposition that may be reasoned with or reasoned about.


Summary

- Inconsistency across contexts is harmless.
- Inconsistency about an unrelated topic is harmless.
- Explicit contradiction may be resolved by user.
- Explicit contradiction may be resolved by system using relative credibility of propositions or sources.
- Explicit contradiction may be resolved by system using preferential ordering of conclusions or hypotheses.


http://www.cse.buffalo.edu/sneps/


A. R. Anderson and N. D. Belnap, Jr. (1975) Entailment, Volume I (Princeton: Princeton University Press).

B. Bhushan (2003) Preferential Ordering of Beliefs for Default Reasoning, M.S. Thesis, Department of Computer Science and Engineering, State University of New York at Buffalo, Buffalo, NY.

J. P. Delgrande and T. Schaub (2000) The role of default logic in knowledge representation. In J. Minker, ed. Logic-Based Artificial Intelligence (Boston: Kluwer Academic Publishers) 107-126.

B. N. Grosof (1997) Courteous Logic Programs: Prioritized Conflict Handling for Rules, IBM Research Report RC 20836, revised.


J. P. Martins and S. C. Shapiro (1983) Reasoning in multiple belief spaces, Proc. Eighth IJCAI (Los Altos, CA: Morgan Kaufmann) 370-373.

J. P. Martins and S. C. Shapiro (1988) A model for belief revision, Artificial Intelligence 35, 25-79.

S. C. Shapiro (1992) Relevance logic in computer science. In A. R. Anderson, N. D. Belnap, Jr., M. Dunn, et al., Entailment, Volume II (Princeton: Princeton University Press) 553-563.

S. C. Shapiro and The SNePS Implementation Group (2002) SNePS 2.6 User's Manual, Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY.

S. C. Shapiro and F. L. Johnson (2000) Automatic belief revision in SNePS. In C. Baral & M. Truszczyński, eds., Proc. 8th International Workshop on Non-Monotonic Reasoning.
