
CSCE 781 Artificial Intelligence: The Resolution Refutation Proof Technique for First-Order Logic

Spring 2013

Marco Valtorta

[email protected]



Acknowledgment

The slides are based mainly on:

  • Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall, 2010.

  • Nils J. Nilsson. Principles of Artificial Intelligence. Tioga, 1980.



Outline

  • Reducing first-order inference to propositional inference

  • Unification

  • Generalized Modus Ponens

  • Forward chaining

  • Backward chaining

  • Resolution



Universal instantiation (UI)

  • Every instantiation of a universally quantified sentence is entailed by it:

    ∀v α
    Subst({v/g}, α)

    for any variable v and ground term g

  • E.g., x King(x) Greedy(x)  Evil(x) yields:

    King(John) Greedy(John) Evil(John)

    King(Richard) Greedy(Richard) Evil(Richard)

    King(Father(John)) Greedy(Father(John)) Evil(Father(John))

    .

    .

    .
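
The substitution operation Subst({v/g}, α) used above can be made concrete with a small sketch (an illustration added to this transcript, not part of the original slides). The representation, with compound terms as tuples and variables as lowercase strings, and the helper names is_variable and subst are assumptions of this sketch and are reused in the later sketches.

    # Terms: a constant is a capitalized string ("John"), a variable is a
    # lowercase string ("x"), and a compound term or atomic sentence is a
    # tuple such as ("King", "x") or ("Knows", "John", ("Mother", "y")).

    def is_variable(t):
        return isinstance(t, str) and t[:1].islower()

    def subst(theta, term):
        """Apply the substitution theta (a dict, e.g. {"x": "John"}) to a term,
        chasing bindings so that nested bound variables are also replaced."""
        if is_variable(term):
            return subst(theta, theta[term]) if term in theta else term
        if isinstance(term, tuple):   # (functor, arg1, ..., argn)
            return (term[0],) + tuple(subst(theta, a) for a in term[1:])
        return term                   # constant

    # Universal instantiation of King(x) with the ground term John:
    print(subst({"x": "John"}, ("King", "x")))   # ('King', 'John')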



Existential instantiation (EI)

  • For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base:

    ∃v α

    Subst({v/k}, α)

  • E.g., xCrown(x) OnHead(x,John) yields:

    Crown(C1) OnHead(C1,John)

    provided C1 is a new constant symbol, called a Skolem constant

  • Logical equivalence is not preserved, because skolemization adds new constants to formulas; however, the new KB is satisfiable iff the old one is satisfiable (s-equivalence)

  • In general, skolemization introduces Skolem functions, as in

    ∀x ∃y Is_Father(y,x), which skolemizes to ∀x Is_Father(Father(x),x)



Reduction to propositional inference

Suppose the KB contains just the following:

∀x King(x) ∧ Greedy(x) ⇒ Evil(x)

King(John)

Greedy(John)

Brother(Richard,John)

  • Instantiating the universal sentence in all possible ways, we have:

    King(John) ∧ Greedy(John) ⇒ Evil(John)

    King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)

    King(John)

    Greedy(John)

    Brother(Richard,John)

  • The new KB is propositionalized: proposition symbols are

    King(John), Greedy(John), Evil(John), King(Richard), etc.



Reduction ctd.

  • Every FOL KB can be propositionalized so as to preserve entailment

  • (A ground sentence is entailed by the new KB iff it is entailed by the original KB)

  • Idea: propositionalize KB and query, apply resolution, return result

  • Problem: with function symbols, there are infinitely many ground terms,

    • e.g., Father(Father(Father(John)))



Reduction ctd.

Theorem (Herbrand, 1930): If a sentence α is entailed by an FOL KB, it is entailed by a finite subset of the propositionalized KB

Note: Herbrand showed that no new constants have to be introduced (so, in the example, the only constants needed are John and Richard), except for one in case the KB contains no constants, in which case one constant must be introduced

Idea: For n = 0 to ∞ do

create a propositional KB by instantiating with depth-n terms

see if α is entailed by this KB

Problem: works if α is entailed, may loop forever if α is not entailed

Note: This (or more precisely the corresponding procedure for unsatisfiability) is called Gilmore’s procedure (1960).

Theorem (Turing, 1936; Church, 1936): Entailment for FOL is semidecidable: algorithms exist that say yes to every entailed sentence, but no algorithm exists that also says no to every non-entailed sentence.



Problems with propositionalization

  • Propositionalization seems to generate lots of irrelevant sentences.

  • E.g., from:

    ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)

    King(John)

    ∀y Greedy(y)

    Brother(Richard,John)

  • it seems obvious that Evil(John), but propositionalization produces lots of facts such as Greedy(Richard) that are irrelevant

  • With p k-ary predicates and n constants, there are p·n^k instantiations.


Unification

  • We can get the inference immediately if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y):

    θ = {x/John, y/John} works

  • Unify(α,β) = θ if αθ = βθ

    p                q                      θ
    Knows(John,x)    Knows(John,Jane)       {x/Jane}
    Knows(John,x)    Knows(y,OJ)            {x/OJ, y/John}
    Knows(John,x)    Knows(y,Mother(y))     {y/John, x/Mother(John)}
    Knows(John,x)    Knows(x,OJ)            fail

  • The last pair fails only because the variable x is used in both sentences; standardizing apart (renaming so that the two sentences share no variables, e.g., Knows(z17,OJ)) eliminates the overlap and allows unification.



Unification

  • To unify Knows(John,x) and Knows(y,z),

    θ = {y/John, x/z} or θ = {y/John, x/John, z/John}

  • The first unifier is more general than the second.

  • There is a single most general unifier (MGU) that is unique up to renaming of variables.

    MGU = { y/John, x/z }


The unification algorithm

Occurs check: when unifying a variable x with a term T, check whether x occurs in T. Note that this check should be done after the substitutions found so far have been applied.

For example, unifying t(x,f(x)) and t(g(y),y) fails the occurs check: after x/g(y) is applied, one obtains t(g(y),f(g(y))) and t(g(y),y); unifying f(g(y)) with y then leads to the attempted substitution y/g(y), which triggers the occurs check and causes failure.
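
The unification algorithm itself appears only as a figure in the original slides. The following is a minimal sketch in the spirit of the AIMA UNIFY procedure, using the is_variable and subst helpers introduced earlier; the function names and representation are assumptions of this transcript, not part of the slides.

    def occurs_in(var, term, theta):
        """Occurs check: does var occur in term once the bindings in theta are chased?"""
        if var == term:
            return True
        if is_variable(term) and term in theta:
            return occurs_in(var, theta[term], theta)
        if isinstance(term, tuple):
            return any(occurs_in(var, a, theta) for a in term[1:])
        return False

    def unify(x, y, theta):
        """Return a substitution extending theta that unifies x and y, or None."""
        if theta is None:
            return None
        elif x == y:
            return theta
        elif is_variable(x):
            return unify_var(x, y, theta)
        elif is_variable(y):
            return unify_var(y, x, theta)
        elif isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
            # unify the functors, then the remaining arguments, threading theta through
            return unify(x[1:], y[1:], unify(x[0], y[0], theta))
        else:
            return None   # distinct constants or mismatched arities

    def unify_var(var, x, theta):
        if var in theta:
            return unify(theta[var], x, theta)
        elif is_variable(x) and x in theta:
            return unify(var, theta[x], theta)
        elif occurs_in(var, x, theta):
            return None   # occurs-check failure, e.g. unifying y with f(... y ...)
        else:
            return {**theta, var: x}

    print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane"), {}))  # {'x': 'Jane'}
    print(unify(("Knows", "John", "x"), ("Knows", "x", "OJ"), {}))       # None (no standardizing apart)
    print(unify(("t", "x", ("f", "x")), ("t", ("g", "y"), "y"), {}))     # None (occurs check)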



Generalized Modus Ponens (GMP)

p1', p2', … , pn', (p1 ∧ p2 ∧ … ∧ pn ⇒ q)
qθ

where pi'θ = piθ for all i

E.g.,

p1' is King(John)         p1 is King(x)
p2' is Greedy(y)          p2 is Greedy(x)
θ is {x/John, y/John}     q is Evil(x)
qθ is Evil(John)

  • GMP is used with a KB of definite clauses (clauses with exactly one positive literal)

  • All variables are assumed to be universally quantified
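
Using the subst and unify sketches above (an illustration added to this transcript, not part of the slides), the GMP step on the King/Greedy example is just a unification followed by a substitution:

    # p1' = King(John), p2' = Greedy(y)  vs.  rule premises p1 = King(x), p2 = Greedy(x)
    theta = unify((("King", "John"), ("Greedy", "y")),
                  (("King", "x"), ("Greedy", "x")), {})
    print(theta)                          # {'x': 'John', 'y': 'John'}
    print(subst(theta, ("Evil", "x")))    # ('Evil', 'John'), i.e. qθ = Evil(John)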



Soundness of GMP

  • Need to show that

    p1', …, pn', (p1 ∧ … ∧ pn ⇒ q) ╞ qθ

    provided that pi'θ = piθ for all i

  • Lemma: For any sentence p, we have p ╞ pθ by UI

    1. (p1 ∧ … ∧ pn ⇒ q) ╞ (p1 ∧ … ∧ pn ⇒ q)θ = (p1θ ∧ … ∧ pnθ ⇒ qθ)

    2. p1', …, pn' ╞ p1' ∧ … ∧ pn' ╞ p1'θ ∧ … ∧ pn'θ

    3. From 1 and 2, qθ follows by ordinary Modus Ponens



Example knowledge base

  • The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.

  • Prove that Col. West is a criminal



Example knowledge base ctd.

... it is a crime for an American to sell weapons to hostile nations:

American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)

Nono … has some missiles, i.e., ∃x Owns(Nono,x) ∧ Missile(x):

Owns(Nono,M1) ∧ Missile(M1)

… all of its missiles were sold to it by Colonel West:

Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)

Missiles are weapons:

Missile(x) ⇒ Weapon(x)

An enemy of America counts as "hostile":

Enemy(x,America) ⇒ Hostile(x)

West, who is American …

American(West)

The country Nono, an enemy of America …

Enemy(Nono,America)

Note: This definite clause KB has no functions. It is therefore a Datalog KB
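
For later reference, the same KB can be written down in the tuple representation used in the earlier sketches. Definite clauses are encoded here as (body, head) pairs, with facts having an empty body; this encoding is an assumption of this transcript, added for illustration only.

    # Definite clauses as (list_of_premises, conclusion); facts have an empty body.
    crime_kb = [
        ([("American", "x"), ("Weapon", "y"), ("Sells", "x", "y", "z"), ("Hostile", "z")],
         ("Criminal", "x")),
        ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
        ([("Missile", "x")], ("Weapon", "x")),
        ([("Enemy", "x", "America")], ("Hostile", "x")),
        ([], ("Owns", "Nono", "M1")),
        ([], ("Missile", "M1")),
        ([], ("American", "West")),
        ([], ("Enemy", "Nono", "America")),
    ]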



Forward chaining algorithm
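
The forward-chaining algorithm is shown only as a figure in the original slides. A naive sketch for Datalog KBs in the crime_kb encoding above (an illustration under the same assumptions, and far less efficient than the incremental version discussed later) is:

    from itertools import product

    def fol_fc_ask(kb, query):
        """Naive forward chaining: keep applying rules to known facts until
        nothing new can be inferred, then check whether the query matches a fact."""
        facts = {head for body, head in kb if not body}
        rules = [(body, head) for body, head in kb if body]
        while True:
            new = set()
            for body, head in rules:
                # try every tuple of known facts against the rule's premises
                for candidates in product(facts, repeat=len(body)):
                    theta = {}
                    for premise, fact in zip(body, candidates):
                        theta = unify(premise, fact, theta)
                        if theta is None:
                            break
                    if theta is not None:
                        inferred = subst(theta, head)
                        if inferred not in facts:
                            new.add(inferred)
            if not new:
                return any(unify(query, fact, {}) is not None for fact in facts)
            facts |= new

    print(fol_fc_ask(crime_kb, ("Criminal", "West")))   # True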


Forward chaining proof

(The original slides build the forward-chaining proof of Criminal(West) over three steps; those figures are not reproduced in this transcript.)



Properties of forward chaining

  • Sound and complete for first-order definite clauses

  • Datalog = first-order definite clauses + no functions

  • FC terminates for Datalog in a finite number of iterations

  • May not terminate in general if α is not entailed

  • This is unavoidable: entailment with definite clauses is semidecidable



Efficiency of forward chaining

Incremental forward chaining: no need to match a rule on iteration k if a premise wasn't added on iteration k-1

 → match each rule whose premise contains a newly added positive literal

The Rete algorithm builds a dataflow network to allow reuse of matchings; it is used in production-system languages such as OPS-5

Matching itself can be expensive:

Database indexing allows O(1) retrieval of known facts

  • e.g., query Missile(x) retrieves Missile(M1)

    Forward chaining is widely used in deductive databases



Hard matching example

Diff(wa,nt) ∧ Diff(wa,sa) ∧ Diff(nt,q) ∧ Diff(nt,sa) ∧ Diff(q,nsw) ∧ Diff(q,sa) ∧ Diff(nsw,v) ∧ Diff(nsw,sa) ∧ Diff(v,sa) ⇒ Colorable()

Diff(Red,Blue)  Diff(Red,Green)  Diff(Green,Red)  Diff(Green,Blue)  Diff(Blue,Red)  Diff(Blue,Green)

  • Colorable() is inferred iff the CSP has a solution

  • CSPs include 3SAT as a special case, hence matching is NP-hard



Backward chaining algorithm

SUBST(COMPOSE(θ1, θ2), p) = SUBST(θ2, SUBST(θ1, p))
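
Only the defining property of COMPOSE survives in this transcript. A minimal sketch of substitution composition consistent with that property, under the representation and the triangular substitutions produced by the unify sketch above (an assumption of this transcript), is:

    def compose(theta1, theta2):
        """Compose substitutions so that subst(compose(theta1, theta2), p)
        equals subst(theta2, subst(theta1, p))."""
        composed = {v: subst(theta2, t) for v, t in theta1.items()}
        for v, t in theta2.items():
            composed.setdefault(v, t)   # keep theta2 bindings not shadowed by theta1
        return composed

    theta1 = {"x": "John"}
    theta2 = {"y": "Jane"}
    p = ("Knows", "x", "y")
    print(subst(compose(theta1, theta2), p) == subst(theta2, subst(theta1, p)))  # True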


Backward chaining example

(The original slides develop the backward-chaining proof tree for Criminal(West) one subgoal at a time; those figures are not reproduced in this transcript.)



Properties of backward chaining

  • Depth-first recursive proof search: space is linear in size of proof

  • Incomplete due to infinite loops

    • → fix by checking the current goal against every goal on the stack

  • Inefficient due to repeated subgoals (both success and failure)

    • → fix using caching of previous results (extra space)

  • Widely used for logic programming
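
Since the backward-chaining algorithm itself also survives only as a figure, here is a minimal depth-first, left-to-right sketch over the crime_kb encoding above; the renaming scheme and helper names are assumptions of this transcript, added for illustration.

    import itertools
    counter = itertools.count()

    def standardize(clause):
        """Rename the variables of a clause apart with a fresh integer suffix."""
        body, head = clause
        mapping, n = {}, next(counter)
        def rename(t):
            if is_variable(t):
                return mapping.setdefault(t, f"{t}_{n}")
            if isinstance(t, tuple):
                return (t[0],) + tuple(rename(a) for a in t[1:])
            return t
        return [rename(p) for p in body], rename(head)

    def fol_bc_ask(kb, goals, theta):
        """Yield every substitution that proves all goals (depth-first, left-to-right)."""
        if not goals:
            yield theta
            return
        first, rest = goals[0], goals[1:]
        for clause in kb:
            body, head = standardize(clause)
            theta2 = unify(head, subst(theta, first), theta)
            if theta2 is not None:
                yield from fol_bc_ask(kb, body + rest, theta2)

    print(next(fol_bc_ask(crime_kb, [("Criminal", "West")], {}), None) is not None)  # True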



Logic programming: Prolog

  • Algorithm = Logic + Control

  • Basis: backward chaining with Horn clauses + bells & whistles

    Widely used in Europe, Japan (basis of 5th Generation project)

    Compilation techniques → 60 million LIPS

  • Program = set of clauses = head :- literal1, … literaln.

    criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z).

  • Depth-first, left-to-right backward chaining

  • Built-in predicates for arithmetic etc., e.g., X is Y*Z+3

  • Built-in predicates that have side effects (e.g., input and output predicates, assert/retract predicates)

  • Closed-world assumption ("negation as failure")

    • e.g., given alive(X) :- not dead(X).

    • alive(joe) succeeds if dead(joe) fails



Prolog

  • Appending two lists to produce a third:

    append([],Y,Y).

    append([X|L],Y,[X|Z]) :- append(L,Y,Z).

  • query: ?-append(A,B,[1,2])

  • answers: A=[] B=[1,2]

    A=[1] B=[2]

    A=[1,2] B=[]



Backward chaining is incomplete

  • Goal trees are built by backward chaining

  • C is true, as shown in the following (natural deduction) proof by cases (split on D)

  • It is impossible to prove C by backward chaining

  • Two additions make backward chaining complete:

    • addition of contrapositives (e.g., ~C → ~D)

    • ancestor checking



PTTP: A Prolog Technology Theorem Prover

  • Prolog is not a full theorem prover for three main reasons:

    • It uses an unsound unification algorithm without the occurs check

    • Its inference system is complete for Horn clauses, but not for more general formulas, as shown in the previous slide

    • Its unbounded depth-first search strategy is incomplete. Also, it cannot display the proofs it finds

  • The Prolog Technology Theorem Prover (PTTP) overcomes these limitations by

    • transforming clauses so that head literals have no repeated variables and unification without the occurs check is valid; remaining unification is done using complete unification with the occurs check in the body

    • adding contrapositives of clauses (so that any literal, not just a distinguished head literal, can be resolved on) and the model-elimination procedure reduction rule that matches goals with the negations of their ancestor goals

    • using a sequence of bounded depth-first searches to prove a theorem;

    • retaining information on what formulas are used for each inference so that the proof can be printed

  • Proof of soundness and completeness follows from the soundness and completeness of resolution refutation



Resolution: brief summary

  • Full first-order version:

    l1···lk, m1···mn

    (l1···li-1li+1 ···lkm1···mj-1mj+1···mn)θ

    where Unify(li, mj) = θ.

  • The two clauses are assumed to be standardized apart so that they share no variables.

  • For example,

    ¬Rich(x) ∨ Unhappy(x)

    Rich(Ken)

    Unhappy(Ken)

    with θ = {x/Ken} (a code sketch of this single step follows this slide)

  • Apply resolution steps to CNF(KB ∧ ¬α); complete for FOL

  • Full disclosure: For completeness, one needs either

    • general (non-binary) resolution, in which subsets of literals that are unifiable are resolved, or

    • factoring, which replaces unifiable literals in a clause with a single literal
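
A minimal sketch of a single binary resolution step over this material (clauses as frozensets of literals, a negative literal written ("not", atom); the encoding, and the assumption that the two clauses are already standardized apart, are additions of this transcript):

    def resolve(clause1, clause2):
        """All binary resolvents of two clauses (assumed standardized apart)."""
        resolvents = []
        for li in clause1:
            for mj in clause2:
                atom_l = li[1] if li[0] == "not" else li
                atom_m = mj[1] if mj[0] == "not" else mj
                if (li[0] == "not") != (mj[0] == "not"):   # complementary signs
                    theta = unify(atom_l, atom_m, {})
                    if theta is not None:
                        rest = (clause1 - {li}) | (clause2 - {mj})
                        resolvents.append(frozenset(subst(theta, lit) for lit in rest))
        return resolvents

    c1 = frozenset({("not", ("Rich", "x")), ("Unhappy", "x")})   # ¬Rich(x) ∨ Unhappy(x)
    c2 = frozenset({("Rich", "Ken")})                            # Rich(Ken)
    print(resolve(c1, c2))   # [frozenset({('Unhappy', 'Ken')})]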



Conversion to CNF

  • Everyone who loves all animals is loved by someone:

    x [y Animal(y) Loves(x,y)]  [y Loves(y,x)]

    1. Eliminate biconditionals and implications

    x [y Animal(y) Loves(x,y)]  [yLoves(y,x)]

    2. Move  inwards: x p ≡x p,  x p ≡x p

    x [y (Animal(y) Loves(x,y))]  [y Loves(y,x)]

    x [y Animal(y) Loves(x,y)]  [y Loves(y,x)]

    x [y Animal(y) Loves(x,y)]  [y Loves(y,x)]



Conversion to CNF contd.

  • Standardize variables: each quantifier should use a different one

    x [y Animal(y) Loves(x,y)]  [z Loves(z,x)]

  • Skolemize: a more general form of existential instantiation.

    Each existential variable is replaced by a Skolem function of the enclosing universally quantified variables:

    ∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)

  • Drop universal quantifiers:

    [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)

  • Distribute ∨ over ∧:

    [Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]



Completeness of Resolution

  • Resolution is refutation-complete: if S is an unsatisfiable set of clauses, then the application of a finite number of resolution steps to S will yield a contradiction



Resolution proof: definite clauses

  • This is an input resolution proof



Resolution proof: general case

  • Curiosity killed the cat: pp.298-300 [AIMA-2]

  • Not a unit resolution proof; not an input resolution proof



Resolution Strategies

  • Breadth-First Strategy (complete)

  • Set-of-Support Strategy (complete)

    • At least one parent of each resolvent is selected from among the clauses resulting from the negation of the goal or from their descendants (the set of support)

  • Unit-Preference Strategy (complete)

  • Unit Resolution (complete for Horn clauses: forward chaining)

    • Each resolvent has a parent that is a unit clause

  • Linear-Input Form Strategy (not complete)

    • Each resolvent has at least one parent belonging to the base set (i.e. the set of clauses given as input)

    • Complete for Horn clauses

    • Called “input resolution” in [AIMA]

  • Ancestry-Filtered Form Strategy (complete)

    • Each resolvent has a parent that is either in the base set or an ancestor of the other parent

    • Called “linear resolution” in [AIMA]



Example KB [Nilsson, 1980]

  • Whoever can read is literate

    • ∀x [R(x) ⇒ L(x)]

    • (1) ~R(x) ∨ L(x)

  • Dolphins are not literate

    • ∀x [D(x) ⇒ ~L(x)]

    • (2) ~D(x) ∨ ~L(x)

  • Some dolphins are intelligent

    • ∃x [D(x) ∧ I(x)]

    • (3a) D(A)

    • (3b) I(A)

  • Goal: Some who are intelligent cannot read

    • ∃x [I(x) ∧ ~R(x)]

    • Negated goal, as a clause: (4') ~I(z) ∨ R(z)



An example proof

  • (5) R(A): resolvent of (3b) and (4')

  • (6) L(A): resolvent of (5) and (1)

  • (7) ~D(A): resolvent of (6) and (2)

  • (8) NIL: resolvent of (7) and (3a)
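
As a check, this refutation can be replayed with the resolve sketch from the resolution summary above; the clause encoding and the variable renaming are, again, assumptions of this transcript.

    c1  = frozenset({("not", ("R", "x")), ("L", "x")})              # (1) ~R(x) ∨ L(x)
    c2  = frozenset({("not", ("D", "x1")), ("not", ("L", "x1"))})   # (2) ~D(x) ∨ ~L(x), renamed
    c3a = frozenset({("D", "A")})                                   # (3a)
    c3b = frozenset({("I", "A")})                                   # (3b)
    c4  = frozenset({("not", ("I", "z")), ("R", "z")})              # (4') negated goal

    step5 = resolve(c3b, c4)[0]     # {R(A)}
    step6 = resolve(step5, c1)[0]   # {L(A)}
    step7 = resolve(step6, c2)[0]   # {~D(A)}
    step8 = resolve(step7, c3a)[0]  # the empty clause NIL
    print(step8)                    # frozenset()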


Breadth-first strategy

Set-of-Support Strategy

Linear-Input Form Strategy

Ancestry-Filtered Form Strategy

(The original slides show the refutation of the dolphin example under each of these strategies; those figures are not reproduced in this transcript.)



Four more examples

  • Prove by resolution the result of exercise 8.2 [AIMA-2]

    • The goal is a universal formula!

  • Group theory axioms and some consequences of them [Schoening, example on pp.94-95]

  • (a) every dragon is happy if all its children can fly; (b) green dragons can fly; (c) a dragon is green if it is a child of at least one green dragon; show that all green dragons are happy

  • (a) Every barber shaves all persons who do not shave themselves; (b) no barber shaves any person who shaves himself or herself; show that there are no barbers

    • Shows the need for factoring or non-binary resolution (p.297 [AIMA-2])

