
Approved Models for Normal Logic Programs

Luís Moniz Pereira and Alexandre Miguel Pinto

Centre for Artificial Intelligence

Universidade Nova de Lisboa



Approved Models for Normal Logic Programs

  • Motivation

  • Notation

  • The Argumentation Perspective

  • Our Argumentation

  • Program Layering

  • Collaborative Argumentation

  • Properties

  • Conclusions and Future Work



Motivation

  • Generalize the Argumentation Perspective to all Normal Logic Programs (NLP) by permitting inconsistency removal

  • Allow revising arguments by Reductio ad Absurdum (RAA)

  • Identify the 2-valued complete, consistent, and most skeptical models of any NLP

  • Identify those models respecting the layered stratification of a program



Notation and Example

  • An NLP is a set of rules of the form

    h ← b1, ..., bn, not c1, ..., not cm

    where 'not' denotes default negation

  • Ex: intend_to_invade ← iran_will_have_WMD

    iran_will_have_WMD ← not intend_to_invade

  • An argument A is a set of negative hypotheses (default literals). Above, the argument {not intend_to_invade} attacks itself, i.e., leads to the conclusion intend_to_invade, and so cannot be accepted

  • This program has no Stable Models
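
A quick way to check this last claim is the Gelfond-Lifschitz test: an interpretation is a Stable Model iff it equals the least model of its reduct. The sketch below is our own minimal Python encoding of that test for the two-rule example (the rule representation and names are ours, not the authors'):

from itertools import chain, combinations

# Each rule is (head, positive_body, negative_body).
program = [
    ("intend_to_invade", {"iran_will_have_WMD"}, set()),
    ("iran_will_have_WMD", set(), {"intend_to_invade"}),
]
atoms = ["intend_to_invade", "iran_will_have_WMD"]

def least_model(definite_rules):
    # Least model of a negation-free program, by fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz: keep rules whose negative body is disjoint from the
    # candidate, drop the remaining 'not' literals, and compare least models.
    reduct = [(h, pos, set()) for h, pos, neg in program if not (neg & candidate)]
    return least_model(reduct) == candidate

candidates = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
print([set(c) for c in candidates if is_stable(set(c))])   # prints []: no Stable Models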



The Argumentation Perspective

  • Though {not intend_to_invade} cannot be accepted, by applying RAA in a 2-valued setting, its contrary intend_to_invade must be true

  • For 2-valued completeness and consistency, iran_will_have_WMD is false

  • In general, using an RAA-inclusive Argumentation Perspective, how can we specify and find the 2-valued complete, consistent and most skeptical models?



Received Wisdom

  • Classically, an Admissible Argument is such that:

    • it does not attack itself

    • it counter-attacks all arguments attacking it

  • Dung's Preferred Extensions are set-inclusion Maximal Admissible Arguments

  • In general, Preferred Extensions are 3-valued

    • in the example above the only Preferred Extension is the empty argument {}, yielding a 3-valued model whose literals are all undefined

  • Not every NLP has a 2-valued Classical Argument!
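
The claim that {} is the only Preferred Extension of the example program can be checked mechanically. Below is a small sketch in our own encoding, where an argument is represented by the set of atoms it assumes false; it illustrates the admissibility test above and is not Dung's original formulation:

from itertools import chain, combinations

# Rules as (head, positive_body, negative_body) for the invade/WMD example.
program = [
    ("intend_to_invade", {"iran_will_have_WMD"}, set()),
    ("iran_will_have_WMD", set(), {"intend_to_invade"}),
]
atoms = ["intend_to_invade", "iran_will_have_WMD"]

def conclusions(hypotheses):
    # Atoms derivable when every 'not a', with a in hypotheses, is assumed true.
    derived, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in program:
            if neg <= hypotheses and pos <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def attacks(attacker, target):
    # attacker attacks target if it derives an atom that target assumes false.
    return bool(conclusions(attacker) & target)

arguments = [set(s) for s in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))]

def admissible(arg):
    return (not attacks(arg, arg)
            and all(attacks(arg, other) for other in arguments if attacks(other, arg)))

print([a for a in arguments if admissible(a)])   # prints [set()]: only the empty argument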



Our Argumentation

  • We wish to provide an Argumentation Perspective where all NLPs have a 2-valued semantics based on a 2-valued Argument

  • Dung's 2-valued Arguments for NLPs correspond exactly to their Stable Models (SMs)

  • By completing Dung's Arguments via RAA, we obtain conservative 2-valued extensions for the SMs of any NLP



Approved Models (AMs)

  • Our approach allows adding positive literals as argument hypotheses, but only insofar as needed to settle RAA application

  • Positive hypotheses resolve the Odd Loops Over Negation (OLONs) addressed by RAA; similarly, they also resolve Infinite Chains Over Negation (ICONs)

  • Intuitively, AMs are 2-valued, maximize default literals and minimally add positive literals so as to be complete

  • AMs without positive literals are the SMs
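
As a toy illustration of how a positive hypothesis settles an OLON (our own check, not the AM construction itself): for the single-rule program a ← not a, assuming 'not a' forces a and is therefore contradictory, while accepting the positive hypothesis a yields the consistent 2-valued model {a} sanctioned by the RAA reading:

# The OLON  a <- not a,  as (head, positive_body, negative_body).
olon = [("a", set(), {"a"})]

def is_classical_model(model, rules):
    # A rule is satisfied when its head is true or its body is false.
    return all(h in model or not (p <= model and not (n & model))
               for h, p, n in rules)

print(is_classical_model(set(), olon))    # False: 'not a' would force a, a contradiction
print(is_classical_model({"a"}, olon))    # True : the positive hypothesis a resolves the OLON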



Top-down querying

  • When top-down querying, we can detect OLONs "on-the-fly" and resolve them with RAA

  • The SM semantics cannot employ top-down query procedures because it is not Relevant; our extension to SM permits them because it is

  • A query literal is supported by the arguments found in its top-down derivation

  • Relevancy of AM guarantees that any supporting arguments are extendable to a complete model



ICONs

An ICON:

p(X) ← p(s(X))

p(X) ← not p(s(X))

Ground version:

p(0) ← p(s(0))

p(0) ← not p(s(0))

p(s(0)) ← p(s(s(0)))

p(s(0)) ← not p(s(s(0)))

...

  • Approved Models (AMs):

    • {p(X)}

  • Ground Approved Models:

    • {p(0), p(s(0)), p(s(s(0))),...}

  • This program has no Stable Models!



Program Layering

  • Example:

    • d ← not c

    • c ← not b

    • b ← not a

    • a ← not a

  • Approved Models (the first is an RSM):

    • {a,c}, {a,b,d} (given a, b is false in the WFM)

  • There are no Stable Models

    • The Approved Models do not necessarily respect the Layering (which is distinct from stratification)

    • Respect of Layering is an optional further requirement

    • The complying Approved Models are the Revised Stable Models
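
For this particular four-rule program, the two Approved Models above can be reproduced by enumerating the minimal 2-valued classical models; the sketch below does exactly that and nothing more (it is not a general procedure for Approved Models, and the encoding is ours):

from itertools import chain, combinations

atoms = ["a", "b", "c", "d"]
rules = [("d", set(), {"c"}),     # d <- not c
         ("c", set(), {"b"}),     # c <- not b
         ("b", set(), {"a"}),     # b <- not a
         ("a", set(), {"a"})]     # a <- not a   (the OLON)

def is_model(m):
    # A rule is satisfied when its head is true or its body is false.
    return all(h in m or not (p <= m and not (n & m)) for h, p, n in rules)

subsets = [set(s) for s in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))]
models = [m for m in subsets if is_model(m)]
print([m for m in models if not any(other < m for other in models)])
# prints the two minimal models {'a', 'c'} and {'a', 'b', 'd'}, matching the slide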



Program Layering

WFM = ⟨ WFM⁺, WFMᵘ, WFM⁻ ⟩ (the true, undefined, and false atoms, respectively)

  • Program division P // I by interpretation I:

    • remove from P the rules with not a in the body, where a ∈ I

    • remove from the bodies of the remaining rules the positive literals a ∈ I

  • M respects the Layering of P iff

    given some a ∈ M, let L = {b ∈ M : b is in the call-graph of a but not vice-versa};

    then a is True or Undefined in the WFM of P // L
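
Read literally, the division operation takes only a few lines; the sketch below follows the two deletion steps on this slide (the data representation is ours):

def divide(program, interpretation):
    # program: list of (head, positive_body, negative_body); interpretation I: a set of atoms.
    divided = []
    for head, pos, neg in program:
        if neg & interpretation:      # step 1: drop rules with 'not a' in the body, a in I
            continue
        divided.append((head, pos - interpretation, neg))   # step 2: delete positive literals in I
    return divided

# Example: dividing the layering program of the previous slide by I = {"a"}.
P = [("d", set(), {"c"}), ("c", set(), {"b"}), ("b", set(), {"a"}), ("a", set(), {"a"})]
print(divide(P, {"a"}))   # keeps only  d <- not c  and  c <- not b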



Collaborative Argumentation

  • Collaborative Argumentation caters for consensus arguments w.r.t. an NLP

  • Our approach enables it, e.g.:

    • merge arguments into one (possibly self-attacking)

    • build AMs from it by non-deterministically revising (to positive) the negative hypotheses that lead to self-attacks

    • an AM is found when a negative-maximal and positive-minimal argument is reached



Properties

  • AMs are consistent 2-valued completions of Preferred Extensions

  • AM existence is guaranteed for every NLP

  • AM is Relevant (and, as a bonus, Cumulative)

  • Layer-respecting AMs are the Revised SMs

  • AMs, RSMs and SMs coincide on programs with neither OLONs nor ICONs



Conclusions and Future Work

  • The Argumentation approach provides a general, flexible framework

  • The framework can seamlessly accommodate other cases of inconsistency, namely those arising from Integrity Constraints and Explicit Negation, and thus encompass (collaborative) Belief Revision

  • The results could be generalized to Argumentation settings not specific to Logic Programs, keeping to the Occam precept, i.e., maximizing skeptical negative assumptions with the help of minimal positive ones

