
Overview of Non-Monotonic Reasoning


Presentation Transcript


  1. Overview of Non-Monotonic Reasoning Jacques Robin

  2. Outline • Monotonic vs. Non-Monotonic Reasoning (NMR) • Epistemic vs. ontological non-monotonicity • Epistemic non-monotonic automated reasoning tasks • Default Reasoning (DR) • Negation As Failure (NAF) in General Logic Programming (GLP) • Abduction and Abductive Logic Programming (ALP) • Belief Revision (BR) • Ontological automated reasoning tasks • Truth-Maintenance (TM) • Belief Update (BU)

  3. Monotonic vs. Non-monotonic Reasoning: Logical Perspective • Monotonic reasoning: ∀KB,NK,Q ∈ ML, (KB ⊨ML Q) → (KB ∪ NK ⊨ML Q) • Non-Monotonic Reasoning (NMR), also called Defeasible Reasoning: ∃KB,NK,Q ∈ NML, (KB ⊨NML Q) ∧ (KB ∪ NK ⊭NML Q)
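To make the definition concrete, here is a minimal sketch in Prolog (SWI-Prolog syntax, using its built-in negation as failure \+; the predicates innocent/1, accused/1 and proven_guilty/1 are invented for this illustration): growing the KB can destroy a previously derivable conclusion.

    :- dynamic proven_guilty/1.

    % Default: an accused person is presumed innocent unless proven guilty.
    innocent(X) :- accused(X), \+ proven_guilty(X).
    accused(bob).

    % ?- innocent(bob).                              --> true
    % ?- assertz(proven_guilty(bob)), innocent(bob). --> false
    % Adding knowledge (NK = proven_guilty(bob)) defeated the old conclusion.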

  4. Monotonic vs. Non-monotonic Reasoning: Internal Agent Architecture Perspective [Diagram: the Environment is perceived through Sensors that Tell a Domain-Specific Knowledge Base; a Generic Domain-Independent Inference Engine is Asked to derive the actions executed by the Effectors. Monotonic reasoning only ever adds to the KB; non-monotonic reasoning additionally requires a Retract operation on it.]

  5. Epistemic Non-Monotonicity • In a partially observable environment (static or dynamic) • Agents must make decisions (choose among alternative actions) that require knowledge about the environment that they do not currently have • They must thus make plausible but logically unsound assumptions (hypotheses) about properties of the environment needed to act but about which they have no certain knowledge • Their KB is thus partitioned between at least two plausibility levels: • Certain knowledge, derived purely deductively from reliable sensors and certain encapsulated initial knowledge • Uncertain knowledge, derived using at least one hypothetical reasoning step to fill a knowledge gap needed to make a decision • When uncertain knowledge p =u v, where p is an environment property and v its currently assumed value, is contradicted by new certain knowledge p =c w (with w ≠ v) purely deductively derived from new knowledge obtained from reliable sensors and certain knowledge: • p =c w must be inserted into the agent's KB, • p =u v must be retracted from the agent's KB, • and all other properties p1 =u v1, ..., pn =u vn that were derived using p =u v must also be retracted from the agent's KB, since one of the hypotheses on which their insertion relied turned out to be invalid (see the sketch below).
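A minimal Prolog sketch of this retraction cascade (the predicates holds_c/2, holds_u/2, derived_from/2 and revise_with_certain/2 are invented for the illustration; a real agent would delegate this to a truth-maintenance system, cf. slide 18):

    :- dynamic holds_c/2, holds_u/2, derived_from/2.

    % revise_with_certain(+P, +W): assimilate certain knowledge p =c w.
    revise_with_certain(P, W) :-
        (   holds_u(P, V), V \== W       % contradicts an assumption p =u v?
        ->  retract_assumption(P)
        ;   true
        ),
        assertz(holds_c(P, W)).

    % Retract an assumed property and, recursively, every property
    % whose derivation relied on it.
    retract_assumption(P) :-
        retractall(holds_u(P, _)),
        forall(retract(derived_from(Q, P)), retract_assumption(Q)).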

  6. Ontological Non-Monotonicity • In a dynamic environment (fully or partially observable), some properties of the environment (fluents) change either spontaneously over time or as the result of agents' actions • When the value of a fluent f changes, e.g., from f = v to f = w, this change must be reflected in the agent's knowledge base: • f = v must be retracted • f = w must be inserted • all fluents f1 = v1, ..., fn = vn whose values were deduced from f = v must also be retracted • and all fluents f'1 = w1, ..., f'm = wm whose values can now be deduced from f = w must be inserted (see the sketch below)
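The same cascade for fluents, sketched in Prolog (fluent/2, deduced_from/2, update_fluent/2 and the domain-supplied ramification/4 are invented names):

    :- dynamic fluent/2, deduced_from/2, ramification/4.

    % Retract a fluent and, recursively, every fluent deduced from it.
    retract_fluent(F) :-
        retractall(fluent(F, _)),
        forall(retract(deduced_from(G, F)), retract_fluent(G)).

    % update_fluent(+F, +W): the environment changed, fluent F now has value W.
    update_fluent(F, W) :-
        retract_fluent(F),                       % retract f = v and dependents
        assertz(fluent(F, W)),                   % insert f = w
        forall(ramification(F, W, G, U),         % deduce the new consequents
               ( assertz(fluent(G, U)),
                 assertz(deduced_from(G, F)) )).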

  7. Combined Epistemic and Ontological Non-Monotonicity • Agents in environments that are both dynamic and partially observable need to perform both kinds of NMR: epistemic and ontological • Example: The Wumpus World • Ontological NMR involves: • after choosing action forward while loc(agent) =c cavern(2,1) • retract loc(agent) =c cavern(2,1) • insert loc(agent) =c cavern(3,1), visited(cavern(2,1)) =c true • Epistemic NMR involves: • after sensing no stench in cavern(3,1) • retract loc(wumpus) =u cavern(2,3), loc(wumpus) =u cavern(3,2) • insert loc(wumpus) =c cavern(2,3), safety(cavern(3,2)) =c ok

  8. Epistemic vs. Ontological Non-Monotonicity: Reasoning Tasks • Epistemic NMR: Default Reasoning (DR), General Logic Programming (GLP) with Negation as Failure (NAF), Abduction and Abductive Logic Programming (ALP), Belief Revision (BR), Truth-Maintenance (TM) • Ontological NMR: Truth-Maintenance (TM), Belief Update (BU)

  9. Default Reasoning (DR) • Extends deduction with certain knowledge and inference rules • with the derivation of uncertain but plausible default assumption knowledge and inference rules • for environment properties not known with certainty but needed for decision making • Example DR inference rules: • Closed-World Assumption (CWA) • Inheritance with overriding • Classical DR knowledge example: KB: ∀X (bird(X) > flies(X)) // default rule ∧ ∀X (penguin(X) → ¬flies(X)) ∧ ∀X (pigeon(X) → bird(X)) ∧ ∀X (penguin(X) → bird(X)) ∧ pigeon(valiant) ∧ penguin(tux); KB ⊨ flies(valiant) ∧ ¬flies(tux)

  10. Variety of Bases for Default Knowledge • Statistical: almost all objects of class C satisfy property P, e.g., almost all living mammals give birth to live offspring • Group confidence: all known objects of class C satisfy property P, e.g., all known giant planets have large satellites • Prototypical: a typical representative of class C satisfies property P, e.g., a pigeon, a typical bird, can fly • Normality: under normal circumstances, an object of class C satisfies property P, e.g., a healthy pigeon with both wings functioning can fly • Lack of contrary evidence: e.g., a person who is talking clearly, walking straight, and neither laughing excessively nor behaving aggressively is not drunk • Inertia: an object of class C that satisfies property P at time t will still satisfy it at time t+n unless some event affects it, e.g., a book on a shelf will remain there until someone takes it out or a strong earthquake occurs

  11. GLP with NAF • Negation As Failure (NAF): connective of General/Normal Logic Programs (GLP/NLP), semantically different from classical negation ¬ • naf p is true iff the attempt to prove p finitely fails under the CWA, whereas ¬p is true iff the attempt to prove ¬p finitely succeeds under the OWA • naf is allowed in rule bodies but not in rule heads, e.g., h :- p, naf q, r. but never naf h :- p, q, r. • Restricted form of default reasoning to derive negative conclusions • Typical DR example in Prolog:
    flies(X) :- bird(X), naf abnormal(X).
    abnormal(X) :- penguin(X), bird(X).
    abnormal(X) :- ostrich(X), bird(X).
    bird(X) :- pigeon(X).
    bird(X) :- penguin(X).
    bird(X) :- ostrich(X).
    pigeon(valiant).
    penguin(tux).
    ?- flies(valiant)
    yes
    ?- flies(tux)
    no
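Running this program in SWI-Prolog (where naf is written \+, and with the redundant bird(X) test dropped from abnormal/1) shows the non-monotonicity of slide 3 at work; the session comments are a sketch:

    :- dynamic pigeon/1, penguin/1, ostrich/1.

    flies(X)    :- bird(X), \+ abnormal(X).
    abnormal(X) :- penguin(X).
    abnormal(X) :- ostrich(X).
    bird(X) :- pigeon(X).
    bird(X) :- penguin(X).
    bird(X) :- ostrich(X).
    pigeon(valiant).
    penguin(tux).

    % ?- flies(valiant).                            --> true
    % ?- assertz(penguin(valiant)), flies(valiant). --> false
    % The new fact made valiant abnormal, defeating the default conclusion.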

  12. Abduction • Given: • I: full Intensional causal background knowledge, a generic theory linking causes to effects • O: full extensional knowledge of Observed effects • P: Partial extensional knowledge of the causes of these observed effects • B: meta-knowledge of an abductive Bias restricting the space S of possible missing causes of these observed effects and defining a preference partial order over S • Abduce hypothetical missing causes H of O such that: • H ∪ P ∪ I ⊨ O, i.e., H fully explains O given P and I, and • B(H), i.e., H pertains to the restricted subset of acceptable and preferred missing causes of O defined by the abductive bias B • Example: • I: ∀D,G,S ((grass(G) ∧ rainedOn(G,D) → wet(G,D+1)) ∧ (grass(G) ∧ sprinklerOn(G,D) → wet(G,D+1)) ∧ (grass(G) ∧ shoe(S) ∧ wet(G,D) ∧ walk(G,S,D) → wet(S,D))) • O: wet(s,26) • P: walk(g,s,26) ∧ grass(g) ∧ shoe(s) • B: ∀D,G (¬(rainedOn(G,D) ∧ sprinklerOn(G,D)) ∧ (rainedOn(G,D) » sprinklerOn(G,D))) • H: rainedOn(g,25)
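The following is a minimal abductive meta-interpreter sketch in Prolog for a propositional version of this example (abduce/3, rule/2, fact/1, abducible/1 and ic/1 are invented names; a real ALP system is considerably more elaborate). Clause order of rule/2 encodes the preference rainedOn » sprinklerOn.

    % Theory I:
    rule(wet(grass), rainedOn).
    rule(wet(grass), sprinklerOn).
    rule(wet(shoes), (wet(grass), walked)).
    fact(walked).                       % known causes P
    abducible(rainedOn).
    abducible(sprinklerOn).
    ic((rainedOn, sprinklerOn)).        % bias B: not both

    % abduce(+Goal, +H0, -H): prove Goal, accumulating abduced hypotheses.
    abduce(true, H, H) :- !.
    abduce((A, B), H0, H) :- !, abduce(A, H0, H1), abduce(B, H1, H).
    abduce(A, H, H) :- fact(A).
    abduce(A, H0, H) :- rule(A, Body), abduce(Body, H0, H).
    abduce(A, H, H) :- abducible(A), memberchk(A, H).
    abduce(A, H, [A|H]) :- abducible(A), \+ memberchk(A, H), consistent([A|H]).

    consistent(H) :- \+ (ic(Body), satisfied(Body, H)).
    satisfied((A, B), H) :- !, satisfied(A, H), satisfied(B, H).
    satisfied(A, H) :- memberchk(A, H), !.
    satisfied(A, _) :- fact(A).

    % ?- abduce(wet(shoes), [], H).  --> H = [rainedOn] ; H = [sprinklerOn]

Slide 13's defeat falls out directly: after asserting fact(sprinklerOn) (the new knowledge P'), the first answer to the same query becomes H = [], because hypothesizing rainedOn would now violate the integrity constraint.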

  13. Abduction is NMR • Why? • New knowledge of previously unknown causes of observed effects may invalidate abductive hypotheses made about the causes of these effects • Example: • If P' = P ∧ sprinklerOn(g,25) • Then H' = ∅ = H \ {rainedOn(g,25)}

  14. Belief Revision (BR) • Automated reasoning task answering the following question: • How to revise the current belief base Cb • to assimilate a new belief N, or • to retract a currently held belief O, • while maintaining the consistency of the new revised belief base Rb? • i.e., Rb ≡ Cb ∪ {N} and Rb ⊭ ⊥, or Rb ≡ Cb \ {O} and Rb ⊭ ⊥ • 1st issue: the KR language used to represent Cb, N, O and Rb

  15. BR: Belief Bases vs. Belief Sets • In almost all cases, the belief base Cb contains not only extensional beliefs Cbe but also intensional beliefs Cbi • Thus implicit beliefs Cbd are logically, plausibilistically or probabilistically derivable from Cb albeit not physically stored in it, i.e., (Cb ≡ Cbe ∪ Cbi) ⊢D Cbd, yielding the belief set Cbs ≡ Cb ∪ Cbd • Should “new” and “currently held” apply to Cb or Cbs? i.e., N ∉ Cb ∧ O ∈ Cb, or N ∉ Cbs ∧ O ∈ Cbs? • The two choices differ, since in some cases two distinct bases Cb ≠ Db have the same belief set Cbs ≡ Dbs, yet revising them with the same N or O yields different belief sets • e.g., with Cb ≡ {p, q, (p ∧ q) → r} and Db ≡ {p, q, r, (p ∧ q) → r}: Cbs ≡ Dbs ≡ {p, q, (p ∧ q) → r, r}, but after retracting q, (Cb \ {q})s ≡ {p, (p ∧ q) → r} while (Db \ {q})s ≡ {p, r, (p ∧ q) → r}, so r survives in one revised belief set but not in the other

  16. BR: Mild Revision vs. Severe Revision • Mild revision: Cb ∪ {N} ⊭ ⊥, or Cb \ {O} ⊭ ⊥ • Then Rb ≡ Cb ∪ {N}, or Rb ≡ Cb \ {O} • Severe revision: Cb ∪ {N} ⊨ ⊥, or Cb \ {O} ⊨ ⊥ • Then a new issue arises: which minimal set of further belief revisions to execute to restore consistency? • i.e., find M+, M− such that: (Cb ∪ {N} ∪ M+) \ M− ⊭ ⊥ and ∀m+, m− ((m+ ⊆ M+ ∧ m− ⊆ M− ∧ (m+, m−) ≠ (M+, M−)) → ((Cb ∪ {N} ∪ m+) \ m− ⊨ ⊥)) (see the sketch below)
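A brute-force sketch of severe revision in Prolog, for a propositional Horn belief base represented as a list of fact/1 and rule/2 terms, where rule(false, Body) encodes an integrity constraint (all names invented; the subset search is exponential and the derivability test assumes acyclic rules, unlike real BR implementations):

    % consistent(+Base): false is not derivable from Base.
    consistent(Base) :- \+ derives(Base, false).

    derives(Base, P) :- member(fact(P), Base).
    derives(Base, P) :-
        member(rule(P, Body), Base),
        forall(member(Q, Body), derives(Base, Q)).

    % revise(+Cb, +N, -Rb): mild revision if possible, else retract a
    % cardinality-minimal subset of Cb to restore consistency.
    revise(Cb, N, [N|Cb]) :- consistent([N|Cb]), !.      % mild
    revise(Cb, N, [N|Kept]) :-                           % severe
        length(Cb, Len),
        between(1, Len, K),                              % smallest K first
        k_subset(K, Cb, Dropped),
        subtract(Cb, Dropped, Kept),
        consistent([N|Kept]).

    k_subset(0, _, []) :- !.
    k_subset(K, [X|Xs], [X|Ys]) :- K > 0, K1 is K - 1, k_subset(K1, Xs, Ys).
    k_subset(K, [_|Xs], Ys)     :- K > 0, k_subset(K, Xs, Ys).

    % ?- revise([fact(p), rule(q,[p]), rule(false,[q,n])], fact(n), Rb).
    % Rb = [fact(n), rule(q,[p]), rule(false,[q,n])]   (fact(p) was retracted)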

  17. Postulates for Rational BR • A set of logical and set-theoretic requirements that must be verified by BR operators to model rational BR • Postulate history summary: • 1988: original AGM postulates (Alchourrón, Gärdenfors and Makinson) • 1991: revised into the KM postulates (Katsuno, Mendelzon) • 1997: revised into the DP postulates (Darwiche and Pearl) • 2005: revised into the JT postulates (Jin and Thielscher) • Many other related proposals in between • Postulate sets differ mainly in terms of: • The epistemological commitment of the KR language used to represent the beliefs: boolean, ternary, possibilistic, plausibilistic, probabilistic, ... • Whether single-step or iterated belief revision is considered; • Whether revision: • is limited to unconditional beliefs, i.e., in logic, atoms, e.g., r(f(X,c),g(d),Y,e), • or extends to conditional beliefs, i.e., in logic, Horn clauses, e.g., p(h(X,Y,a),b) ∧ q(Y) → r(f(X,c),g(d),Y,e)

  18. Truth Maintenance Systems (TMS) • TMS Architecture: • D: Deductive rule base, a conjunction of definite Horn clauses (c1 ← p11 ∧ ... ∧ p1q) ∧ ... ∧ (cp ← pp1 ∧ ... ∧ ppr) • I: Integrity constraint base, a conjunction of Horn clauses concluding false: (⊥ ← p1 ∧ ... ∧ pn) ∧ ... ∧ (⊥ ← p'1 ∧ ... ∧ p'm) • A: Assumption base, a conjunction of atomic formulae a1 ∧ ... ∧ ao • that do not unify with any deductive rule base conclusion, i.e., ∀i,j, 1≤i≤o, 1≤j≤p, ¬unif(ai,cj), • but are nonetheless currently assumed true, for they do not deductively lead to the violation of any integrity constraint, i.e., A ∪ F ∪ D ∪ I ⊭ ⊥ • F: Fact base, • a conjunction of atomic formulae currently proven or assumed true, f1[a11 ∧ ... ∧ a1l] ∧ ... ∧ fu[au1 ∧ ... ∧ auw], • that unify with at least one deductive rule base conclusion, i.e., ∀i ∃j, 1≤i≤u, 1≤j≤p, unif(fi,cj), • and where each formula f is annotated with the conjunction of its justifications, i.e., the minimal set of assumptions that unified with the premises of the deductive rules whose chained firing concluded f • TMS Engine: • Implements a form of belief revision with a Boolean logical instead of plausibilistic epistemological commitment • Can serve for both epistemic and ontological NMR

  19. TMS Engine Algorithm • Let A0 be the initial assumption base • 1. Apply D to A0 to obtain the first fact base F0, i.e., D ∪ A0 ⊨ F0 • 2. Check whether the newly derived facts violate integrity constraints • 3. If D ∪ F0 ∪ I ⊨ ⊥, then • use CDBJ (Conflict-Directed BackJumping) to identify the minimal subset A0⊥ of A0 responsible for the failure • update the assumption base by retracting the elements of A0⊥, i.e., A1 = A0 \ A0⊥ • update the fact base by retracting the elements that were justified by an element of A0⊥, i.e., F1 = F0 \ {f[a1 ∧ ... ∧ al] ∈ F0 | ∃i, 1≤i≤l, ai ∈ A0⊥} • 4. If D ∪ F0 ∪ I ⊭ ⊥, then F1 = F0 • 5. Return F1 • If new positive evidence becomes available, e.g., facts F2 from a reliable agent sensor or user input knowledge, and/or assumptions A2 from user input hypotheses, then • apply D to F1 and A1 together with this new evidence to obtain the revised fact base F3, i.e., D ∪ F1 ∪ A1 ∪ F2 ∪ A2 ⊨ F3 • go back to truth-maintenance steps 2-5 and return the F4 resulting from applying these steps to F3 • If new negative evidence becomes available, e.g., knowledge that some currently made assumptions AI are now known to be incorrect or are no longer valid, then return F5 = F4 \ {f[a1 ∧ ... ∧ al] ∈ F4 | ∃i, 1≤i≤l, ai ∈ AI} (see the sketch below)
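A toy justification-tracking engine in the spirit of steps 1-4, sketched in Prolog (rule/2 and ic/1 stand for the domain's D and I, assumed/1 for A, and fact/2 stores each element of F with its justification; the naive pick of the first culprit assumption stands in for CDBJ, and a full engine would re-propagate after new evidence, per steps 2-5):

    :- dynamic assumed/1, fact/2, rule/2, ic/1.

    % Forward-chain D, annotating each derived fact with the set of
    % assumptions (its justification) that support it.
    propagate :-
        rule(Head, Body),
        \+ fact(Head, _),
        support(Body, Just),
        assertz(fact(Head, Just)),
        propagate.
    propagate.

    support([], []).
    support([P|Ps], Just) :-
        (   assumed(P) -> J = [P]       % leaves are assumptions
        ;   fact(P, J)
        ),
        support(Ps, Rest),
        append(J, Rest, Just).

    % While some integrity constraint is supported, retract one culprit
    % assumption (CDBJ would pick a provably minimal one) and its dependents.
    restore_consistency :-
        ic(Body), support(Body, [A|_]), !,
        retract_assumption(A),
        restore_consistency.
    restore_consistency.

    retract_assumption(A) :-
        retractall(assumed(A)),
        forall(( fact(F, Just), memberchk(A, Just) ),
               retract(fact(F, Just))).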

  20. Belief Update (BU) • Belief Update vs. Belief Revision: • Update is motivated by ontological non-monotonicity, i.e., actual changes in the agent's dynamic environment, resulting from the agent's actions or from spontaneously occurring events; • Revision is motivated by epistemic non-monotonicity, i.e., changes in the amount and certainty of the agent's knowledge about its partially observable or noisy environment. • Three main problems: • The frame problem • The ramification problem • The qualification problem • Each problem has two aspects: • A representational aspect: the design of a concise, space-efficient, yet sufficiently expressive knowledge representation language for the changing properties of the environment (fluents) • An inferential aspect: the design of a time-efficient yet sound and as-complete-as-possible update procedure for the truth value, plausibility or probability of each fluent

  21. The Frame Problem • General law of inertia: • Any given event (in particular, an agent action) induces only very local changes on the environment state, i.e., it affects only a tiny minority of all the fluents describing this state; • All the others, i.e., almost all fluents, remain unchanged by any given event; • Example: the pick action in the Wumpus World only changes the hasGold(agent) fluent, affecting no other fluent such as in(agent,X,Y), hasArrow(agent), alive(agent), alive(wumpus), ... • Representational frame problem: how to design a KR language that exploits this locality to be concise and space-efficient? • Inferential frame problem: how to devise time-efficient techniques to revise the changing fluents using this KR language?

  22. The Frame Problem • Naive approach to representing fluent changes directly in Classical First-Order Logic (CFOL): • Precondition axioms that represent the circumstances required for a given action to be executable, • e.g., ∀A,O,X,Y,T (in(A,X,Y,T) ∧ in(O,X,Y,T) → poss(pick(A,O),T)) • Direct effect axioms that represent the intended fluent changes of that same action, • e.g., ∀A,O,T (poss(pick(A,O),T) ∧ do(pick(A,O),T) → has(A,O,T+1)) • CFOL's OWA forces the need for additional frame axioms explicitly representing everything that the action does not change, • e.g., (∀A,O,X,Y,T (poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ loc(A,X,Y,T) → loc(A,X,Y,T+1))) ∧ (∀A,O,O',T (poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ O ≠ O' ∧ has(A,O',T) → has(A,O',T+1))) ∧ (∀A,O,O',X,Y,T (poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ O ≠ O' ∧ loc(O',X,Y,T) → loc(O',X,Y,T+1))) ∧ (∀A,O,L,T (poss(pick(A,O),T) ∧ do(pick(A,O),T) ∧ alive(L,T) → alive(L,T+1))) ∧ ... • Representational frame problem: combinatorially explosive size of the representation • Inferential frame problem: exponential time to apply the inertia law by propagating such a large frame axiom set after each event occurrence or action execution • Insight: all these unchanged fluents are independent of almost all events and actions • Frame axioms are wasteful because they fail to exploit this massive independence (see the sketch below)
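One classic remedy is to represent inertia once rather than per fluent-action pair, sketched here in Prolog with negation as failure (the encoding and names are ours, loosely following the Wumpus example; the successor-state axioms of the situation calculus, slide 25, are the rigorous CFOL counterpart):

    % One direct-effect relation plus a single inertia rule replace the
    % combinatorially many frame axioms.
    holds(F, do(A, S)) :- causes(A, F, S).                 % direct effects
    holds(F, do(A, S)) :- holds(F, S), \+ affects(A, F).   % law of inertia

    causes(pick(Agent, O), has(Agent, O), S) :-            % effect axiom with
        holds(in(Agent, X, Y), S),                         % precondition folded in
        holds(in(O, X, Y), S).

    affects(pick(_, O), in(O, _, _)).   % pick only moves the picked object
    % Every fluent untouched by an action needs no axiom at all.

    holds(F, s0) :- init(F).
    init(in(agent, 2, 1)).
    init(in(gold, 2, 1)).

    % ?- holds(has(agent, gold), do(pick(agent, gold), s0)).  --> true
    % ?- holds(in(agent, 2, 1),  do(pick(agent, gold), s0)).  --> true, by inertia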

  23. The Ramification Problem • A direct effect axiom is a piece of diachronic knowledge that captures only the intended effects of a given action, i.e., the reason why it was executed by the agent; • Those effects are changes to the truth value, plausibility or probability of the environment fluents that match the goal of the agent when it chose the action; • e.g., in a situation S where the fluents loc(agent,X,Y,S) ∧ dir(agent,east,S) are true, executing the action do(agent,forward,S) has the direct effect of turning the fluent loc(agent,X+1,Y,res(S,do(agent,forward))) true • However, in most cases the truth value, plausibility or probability of those direct effect fluents is synchronically related to other fluents, triggering changes in those other fluents, called ramifications, or action indirect effects; • e.g., in a situation S where the fluents loc(agent,X,Y,S) ∧ dir(agent,east,S) ∧ has(agent,gold,S) ∧ has(agent,bow,S) ∧ in(arrow,bow,S) are true, executing the action do(agent,forward,S) has the indirect effect of turning the fluents loc(gold,X+1,Y,res(S,do(agent,forward))) ∧ loc(bow,X+1,Y,res(S,do(agent,forward))) ∧ loc(arrow,X+1,Y,res(S,do(agent,forward))) true • Possibly recursive, synchronic ramification axioms, kept separate from the diachronic direct effect axioms, solve the representational ramification problem (see the sketch below)
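A sketch of this separation in Prolog (a fragment assuming the holds/2 fluent encoding of the previous sketch; carried/2 is an invented helper): the diachronic effect axiom mentions only the agent's location, and recursive synchronic rules propagate it to everything carried.

    % Diachronic direct effect of forward (agent facing east):
    holds(loc(agent, X1, Y), do(forward, S)) :-
        holds(loc(agent, X, Y), S),
        holds(dir(agent, east), S),
        X1 is X + 1.

    % Synchronic ramification: whatever is carried is where the agent is.
    holds(loc(O, X, Y), S) :- carried(O, S), holds(loc(agent, X, Y), S).

    carried(O, S) :- holds(has(agent, O), S).
    carried(O, S) :- carried(C, S), holds(in(O, C), S).   % e.g., arrow in bow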

  24. The Qualification Problem • Concise action precondition axioms with a few fluents are unrealistically optimistic in most environments, • e.g., non-deterministic environments, environments perceived through noisy sensors, environments with a very wide variety of contingencies, • for they implicitly assume that a great many other fluents also hold • While these fluents do hold in almost all the normal circumstances in which the agent will attempt the action, • they fail to hold in some abnormal, unusual circumstances, • leading the agent to create the false expectation that a given action is executable, • e.g., if the gold is in the same cavern as a dead wumpus, and the wumpus's sticky green acid blood has spilled on it, then the agent cannot take it directly with its hands • Qualification problem: how to reason correctly about action preconditions while avoiding extremely long and elusively exhaustive conjunctions of fluents? • Solutions generally involve some form of default or probabilistic reasoning (see the sketch below)
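The default-reasoning solution mentioned above, sketched with NAF (a fragment assuming the holds/2 encoding of the earlier sketches; the abnormality predicate and the acid-blood qualification are invented to match the slide's example): state the few normal preconditions explicitly, and let open-ended abnormal circumstances defeat executability only once they become known.

    poss(pick(A, O), S) :-
        holds(in(A, X, Y), S),
        holds(in(O, X, Y), S),
        \+ abnormal(pick(A, O), S).     % assumed normal by default

    % Qualifications are added one by one, as they become known:
    abnormal(pick(_A, O), S) :- holds(coveredInStickyAcidBlood(O), S).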

  25. Approaches to Belief Update • The situation calculus • The event calculus • Transaction logic • The fluent calculus
