
Approved Models for Normal Logic Programs Luís Moniz Pereira and Alexandre Miguel Pinto



L. M. Pereira and A. M. Pinto

Approved Models for Normal Logic Programs

Luís Moniz Pereira and Alexandre Miguel Pinto

Centre for Artificial Intelligence

Universidade Nova de Lisboa


- Motivation
- Notation
- The Argumentation Perspective
- Our Argumentation
- Program Layering
- Collaborative Argumentation
- Properties
- Conclusions and Future Work


- Generalize the Argumentation Perspective to all Normal Logic Programs (NLP) by permitting inconsistency removal
- Allow revising arguments by Reductio ad Absurdum (RAA)
- Identify the 2-valued complete, consistent, and most skeptical models of any NLP
- Identify those models respecting the layered stratification of a program


- An NLP is a set of rules of the form
h ← b1, ..., bn, not c1, ..., not cm

'not' denotes default negation

- Ex: intend_to_invade ← iran_will_have_WMD
iran_will_have_WMD ← not intend_to_invade

- An argument A is a set of negative hypotheses (default literals). Above, the argument {not intend_to_invade} attacks itself, i.e., leads to the conclusion intend_to_invade, and so cannot be accepted
- This program has no Stable Models
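The claim that this program has no Stable Models can be checked by brute force. The sketch below (the rule encoding and function names are my own, not from the slides) enumerates every 2-valued interpretation and tests stability via the Gelfond-Lifschitz reduct:

```python
from itertools import chain, combinations

# Hypothetical encoding: each rule is (head, positive_body, negative_body).
rules = [
    ("intend_to_invade", {"iran_will_have_WMD"}, set()),   # intend_to_invade <- iran_will_have_WMD
    ("iran_will_have_WMD", set(), {"intend_to_invade"}),   # iran_will_have_WMD <- not intend_to_invade
]
atoms = {"intend_to_invade", "iran_will_have_WMD"}

def reduct(rules, I):
    """Gelfond-Lifschitz reduct: drop rules whose negative body meets I,
    then drop the remaining (now vacuous) negative literals."""
    return [(h, pos) for h, pos, neg in rules if not (neg & I)]

def least_model(definite_rules):
    """Least model of a definite program by naive fixpoint iteration."""
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite_rules:
            if pos <= M and h not in M:
                M.add(h)
                changed = True
    return M

def stable_models(rules, atoms):
    """Brute force: I is stable iff it equals the least model of its reduct."""
    candidates = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(c) for c in candidates
            if least_model(reduct(rules, set(c))) == set(c)]

print(stable_models(rules, atoms))  # → []  (no Stable Models)
```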

L. M. Pereira and A. M. Pinto

- Though {not intend_to_invade} cannot be accepted, applying RAA in a 2-valued setting means its contrary, intend_to_invade, must be true
- For 2-valued completeness and consistency, iran_will_have_WMD must then be false
- In general, using an RAA-inclusive Argumentation Perspective, how can we specify and find the 2-valued complete, consistent, and most skeptical models?
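The RAA step above can be sanity-checked classically. A small sketch (same hypothetical rule encoding as before): the all-skeptical candidate, where only iran_will_have_WMD is derivable, violates a rule, while the RAA-revised interpretation satisfies both:

```python
# Hypothetical rule encoding: (head, positive_body, negative_body).
rules = [
    ("intend_to_invade", {"iran_will_have_WMD"}, set()),
    ("iran_will_have_WMD", set(), {"intend_to_invade"}),
]

def is_classical_model(rules, true_atoms):
    """Every rule whose body is satisfied must have a true head."""
    return all(h in true_atoms
               for h, pos, neg in rules
               if pos <= true_atoms and not (neg & true_atoms))

# The argument {not intend_to_invade} leads to iran_will_have_WMD alone -- inconsistent:
print(is_classical_model(rules, {"iran_will_have_WMD"}))   # → False
# RAA revision: intend_to_invade true, iran_will_have_WMD false -- consistent:
print(is_classical_model(rules, {"intend_to_invade"}))     # → True
```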


- Classically, an Admissible Argument is such that:
- it does not attack itself
- it counter-attacks all arguments attacking it

- Dung's Preferred Extensions are set-inclusion Maximal Admissible Arguments
- In general, Preferred Extensions are 3-valued
- in the example above the only Preferred Extension is the empty argument {}, yielding a 3-valued model whose literals are all undefined

- Not every NLP has a 2-valued Classical Argument!


- We wish to provide an Argumentation Perspective where all NLPs have a 2-valued semantics based on a 2-valued Argument
- Dung's 2-valued Arguments for NLPs correspond exactly to their Stable Models (SMs)
- By completing Dung's Arguments via RAA, we obtain conservative 2-valued extensions for the SMs of any NLP


- Our approach allows adding positive literals as argument hypotheses, but only insofar as needed to settle RAA application
- Positive hypotheses resolve the Odd Loops Over Negation (OLONs) addressed by RAA; they likewise resolve Infinite Chains Over Negation (ICONs)
- Intuitively, AMs are 2-valued, maximize default literals and minimally add positive literals so as to be complete
- AMs without positive literals are the SMs
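As a rough brute-force illustration (this is not the actual AM definition, just a classical-model check of my own on the simplest OLON a ← not a): no interpretation is stable, yet every classical 2-valued model must make a true, so {a} is exactly the minimal positive addition that RAA sanctions:

```python
from itertools import chain, combinations

rules = [("a", set(), {"a"})]  # the simplest OLON: a <- not a
atoms = {"a"}

def is_classical_model(rules, M):
    """Every rule whose body is satisfied in M must have a true head."""
    return all(h in M for h, pos, neg in rules if pos <= M and not (neg & M))

subsets = [set(c) for c in chain.from_iterable(
    combinations(sorted(atoms), r) for r in range(len(atoms) + 1))]
models = [M for M in subsets if is_classical_model(rules, M)]
print(models)  # → [{'a'}]  -- a is forced true; it is the positive hypothesis added
```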

- When querying top-down we can detect OLONs "on-the-fly" and resolve them with RAA
- SM cannot employ top-down query procedures because that semantics is not Relevant; our extension of SM permits them because it is Relevant
- A query literal is supported by the arguments found in its top-down derivation
- Relevancy of AM guarantees that any supporting arguments are extendable to a complete model


An ICON:

p(X) ← p(s(X))

p(X) ← not p(s(X))

Ground version:

p(0) ← p(s(0))

p(0) ← not p(s(0))

p(s(0)) ← p(s(s(0)))

p(s(0)) ← not p(s(s(0)))

...

- Approved Models (AMs):
- {p(X)}

- Ground Approved Models:
- {p(0), p(s(0)), p(s(s(0))),...}

- This program has no Stable Models!
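The intuition behind the Approved Model can be checked on a finite truncation of the infinite ground program (the truncation depth N and the encoding are my own approximation; the real ICON is infinite). Whether or not p(s(X)) holds, one of the two rules fires p(X), so every classical 2-valued model of the truncated chain makes each p(sⁱ(0)) with i < N true, matching the Approved Model:

```python
from itertools import chain, combinations

# Hypothetical finite truncation of the infinite ground ICON, depth N.
N = 4
atoms = [f"p{i}" for i in range(N + 1)]          # p0 = p(0), p1 = p(s(0)), ...
rules = []
for i in range(N):
    rules.append((f"p{i}", {f"p{i+1}"}, set()))  # p(X) <- p(s(X))
    rules.append((f"p{i}", set(), {f"p{i+1}"}))  # p(X) <- not p(s(X))

def is_classical_model(rules, M):
    return all(h in M for h, pos, neg in rules if pos <= M and not (neg & M))

subsets = [set(c) for c in chain.from_iterable(
    combinations(atoms, r) for r in range(len(atoms) + 1))]
models = [M for M in subsets if is_classical_model(rules, M)]

# Every classical model forces p0 .. p(N-1) true, as in the Approved Model:
forced = {f"p{i}" for i in range(N)}
print(all(forced <= M for M in models))  # → True
```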


- Example:
- d ← not c
- c ← not b
- b ← not a
- a ← not a

- The Approved Models are {a,c} and {a,b,d}; given a, b is false in the WFM, so {a,b,d} does not respect the Layering

- The Approved Models do not necessarily respect the Layering (≠ from stratification)
- Respect of Layering is an optional further requirement
- The complying Approved Models are the Revised Stable Models


WFM = < WFM+, WFMu, WFM- >

- Program division P // I by interpretation I:
- remove from P the rules with not a in the body, where a ∈ I
- remove from the bodies of the remaining rules the positive literals a ∈ I

- M respects the Layering of P iff
given any a ∈ M, let L = {b ∈ M : b is in the call-graph of a but not vice-versa};

then a is True or Undefined in the WFM of P // L
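Program division and the layering check can be sketched in a few lines (the rule encoding and helper names are mine; the WFM is computed via the standard alternating fixpoint of the Gelfond-Lifschitz operator). On the earlier example d ← not c; c ← not b; b ← not a; a ← not a, with L = {a}, the check confirms b is false in the WFM of P // L:

```python
def reduct(rules, I):
    """Gelfond-Lifschitz reduct relative to interpretation I."""
    return [(h, pos) for h, pos, neg in rules if not (neg & I)]

def least_model(definite_rules):
    """Least model of a definite program by fixpoint iteration."""
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite_rules:
            if pos <= M and h not in M:
                M.add(h)
                changed = True
    return M

def divide(rules, I):
    """Program division P // I as on the slide: drop rules with 'not a'
    in the body for a in I, and drop positive body literals in I."""
    return [(h, pos - I, neg) for h, pos, neg in rules if not (neg & I)]

def wfm(rules, atoms):
    """Well-founded model via the alternating fixpoint of Gamma."""
    gamma = lambda I: least_model(reduct(rules, I))
    T = set()
    while (T2 := gamma(gamma(T))) != T:   # true atoms: lfp of Gamma^2
        T = T2
    true = T
    undefined = gamma(T) - T
    false_ = set(atoms) - gamma(T)
    return true, undefined, false_

# Example: d <- not c ; c <- not b ; b <- not a ; a <- not a
P = [("d", set(), {"c"}), ("c", set(), {"b"}),
     ("b", set(), {"a"}), ("a", set(), {"a"})]
PL = divide(P, {"a"})                  # L = {a}: atoms below b in the call-graph
true, undefined, false_ = wfm(PL, {"b", "c", "d"})
print("b" in false_)  # → True: given a, b is false in the WFM of P // L
```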


- Collaborative Argumentation caters for consensus arguments with respect to an NLP
- Our approach enables it, e.g.:
- merge arguments into one – possibly self-attacking
- build AMs from it by non-deterministically revising (to positive) negative hypotheses leading to self-attacks
- An AM is found when a negative maximal and positive minimal argument is reached


- AMs are consistent 2-valued completions of Preferred Extensions
- AM existence is guaranteed for NLPs
- AM is Relevant (bonus: and Cumulative)
- Layer respecting AMs are the Revised SMs
- AMs, RSMs and SMs coincide on programs with neither OLONs nor ICONs


- Argumentation approach provides general flexible framework
- The framework can adumbrate seamlessly other cases of inconsistency, namely arising from Integrity Constraints and Explicit Negation, and thus encompass (collaborative) Belief Revision
- Results could be generalized to Argumentation settings not specific to Logic Programs, keeping to the Occam precept: maximize skepticism via negative assumptions, with the help of minimal positive ones