Lecture 9: Resolution in First Order Logic

Theorem Proving in First Order Logic

Unification

Resolution

Heshaam Faili

hfaili@ece.ut.ac.ir

University of Tehran

- If a proof procedure is always guaranteed to find a proof of a particular sentence or of its negation, then the question of logical implication for that sentence is said to be decidable.
- To see whether KB |= c, we generate all possible inferences from KB, stopping when we derive c or ~c.

- In general, neither c nor ~c may be logically implied by KB.
- In this case, the proof procedure will never stop.
- The question of logical implication in this case is thus semidecidable:
- If KB |= c or KB |= ~c, the proof procedure will eventually discover it. Otherwise, the procedure will run forever.

- Universal and Existential Elimination
- Existential Introduction

- Mechanical proof procedure with a single rule
- Invented by J. A. Robinson in 1965
- Three main steps:
  1. Clause form: transform all sentences into a uniform format
  2. Unification: find a variable assignment so that two clauses can be resolved
  3. Resolution: a rule to obtain new conclusions

- We want a uniform representation for our logical propositions, so that we can use a simple uniform proof procedure.
- Clausal form expresses all logical propositions using literals (atomic sentences or negated atomic sentences) and clauses (sets of literals representing their disjunction).
- Ex: all of the following have a single clausal form: a => b, ~a ∨ b, b ∨ ~a, ~(a ∧ ~b)

- Any set of sentences in predicate calculus can be converted to clausal form.
- Eight-step procedure for conversion:
- 1. Replace implications
- 2. Distribute negations
- 3. Standardize variables
- 4. Replace existential
- 5. Remove universals
- 6. Distribute disjunctions
- 7. Replace operators
- 8. Rename variables
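As a concrete illustration of the first two steps, here is a minimal Python sketch. The encoding of formulas as nested tuples such as `('=>', 'a', 'b')` is a hypothetical choice for this example, not something from the lecture, and the quantifier rules of step 2 are omitted.

```python
# Formulas encoded as nested tuples, e.g. ('=>', 'a', 'b') for a => b
# and ('~', ('&', 'a', 'b')) for ~(a & b). Hypothetical encoding.

def replace_implications(f):
    """Step 1: rewrite (p => q) as (~p | q), at all levels."""
    if isinstance(f, str):
        return f
    op, args = f[0], [replace_implications(a) for a in f[1:]]
    if op == '=>':
        return ('|', ('~', args[0]), args[1])
    return (op, *args)

def distribute_negations(f):
    """Step 2: push ~ inward via double negation and De Morgan's laws."""
    if isinstance(f, str):
        return f
    if f[0] == '~' and isinstance(f[1], tuple):
        g = f[1]
        if g[0] == '~':                        # ~~p  ->  p
            return distribute_negations(g[1])
        if g[0] in ('&', '|'):                 # De Morgan
            dual = '|' if g[0] == '&' else '&'
            return (dual, distribute_negations(('~', g[1])),
                          distribute_negations(('~', g[2])))
    return (f[0], *(distribute_negations(a) for a in f[1:]))

print(replace_implications(('=>', 'a', 'b')))
# ('|', ('~', 'a'), 'b')
print(distribute_negations(('~', ('&', 'a', ('~', 'b')))))
# ('|', ('~', 'a'), 'b')
```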

Clause Form Conversion

1. Replace implications

Any sentence of the form φ => ψ becomes ~φ ∨ ψ, at all levels.

2. Distribute negations

Repeatedly do the following:

~~ψ becomes ψ

~(φ ∧ ψ) becomes ~φ ∨ ~ψ

~(φ ∨ ψ) becomes ~φ ∧ ~ψ

~∀X φ becomes ∃X ~φ

~∃X φ becomes ∀X ~φ

3. Standardize variables

Rename variables to ensure that each quantifier has its own unique variable.

Ex: (∀X p(X)) ∨ (∃X q(X)) becomes (∀X p(X)) ∨ (∃Y q(Y))

Consider: ∀Y ∃X p(X,Y)

The identity of X depends on the value of Y. We can replace X by a function of Y: ∀Y p(g(Y),Y), where g is a Skolem function.

4. Replace existential

- Replace each occurrence of an existentially quantified variable by a Skolem function.
- The function’s arguments are the universally quantified variables that are bound by universal quantifiers whose scope includes the scope of the existential quantifier being eliminated.
- The Skolem functions must be new, i.e., not already present in any other sentences.
- If the existential quantifier is not within a universal quantifier, use a Skolem function of no arguments, i.e., a constant.

1. ∃X p(X) becomes p(g)

2. ∀Y ∃X p(X,Y) becomes ∀Y p(g(Y),Y)

3. ∃X ∀Y p(X,Y) becomes ∀Y p(g,Y)

4. ∀Y ∃X ((q(X) ∧ p(X,Y)) ∨ ∀Z ∃W r(W))

becomes

∀Y ((q(f(Y)) ∧ p(f(Y),Y)) ∨ ∀Z r(g(Y,Z)))

where f and g are Skolem functions

5. Remove universals

Throw away ∀'s and assume all variables are universally quantified:

∀X ∀Y ∀Z (q(h(Y)) ∨ p(h(Y),X) ∨ r(X,Z)) becomes q(h(Y)) ∨ p(h(Y),X) ∨ r(X,Z)

6. Distribute disjunctions

Write the formula in conjunctive normal form (a conjunction of disjunctions of literals) using:

φ ∨ (ψ ∧ θ) becomes (φ ∨ ψ) ∧ (φ ∨ θ)
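This rewrite can be applied recursively until no conjunction remains under a disjunction. A small Python sketch, assuming a hypothetical encoding of formulas as nested tuples with `'|'` for disjunction and `'&'` for conjunction:

```python
# Formulas as nested tuples: '|' for disjunction, '&' for conjunction
# (hypothetical encoding chosen for illustration).

def distribute_or(f):
    """Step 6: rewrite p | (q & r) as (p | q) & (p | r), recursively."""
    if isinstance(f, str):
        return f
    op, args = f[0], [distribute_or(a) for a in f[1:]]
    if op == '|':
        left, right = args
        if isinstance(right, tuple) and right[0] == '&':
            return ('&', distribute_or(('|', left, right[1])),
                         distribute_or(('|', left, right[2])))
        if isinstance(left, tuple) and left[0] == '&':
            return ('&', distribute_or(('|', left[1], right)),
                         distribute_or(('|', left[2], right)))
    return (op, *args)

print(distribute_or(('|', 'p', ('&', 'q', 'r'))))
# ('&', ('|', 'p', 'q'), ('|', 'p', 'r'))
```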

7. Replace operators

- Replace the conjunction
- S1 ∧ S2 ∧ … ∧ Sn
- with the set of clauses S1, S2, …, Sn
- Convert each Si into a set of literals (that is, get rid of the disjunction symbols "∨" and write each Si in clause notation).
- Ex: (p(X) ∨ q(Y)) ∧ (p(X) ∨ ~r(X,Y))
- becomes two clauses:
- {p(X), q(Y)}
- {p(X), ~r(X,Y)}

8. Rename variables

Change variable names so that no variable symbol appears in more than one clause:

Ex: {p(X), q(Y)} and {p(X), ~r(X,Y)}

becomes {p(X), q(Y)} and {p(Z), ~r(Z,W)}

we can do this because:

"X "Y(p(X) q(Y)) (p(X) ~r(X,Y))

is equivalent to

"X "Y(p(X) q(Y)) "Z "W(p(Z) ~r(Z,W))

"X (p(X) => (("Yp(Y) => (p(f(X,Y)))

(~"Y (~q(X,Y) p(Y))))

1. "X (~p(X) (("Y~p(Y) (p(f(X,Y)))

(~"Y (~q(X,Y) p(Y))))

2. "X (~p(X) (("Y ~p(Y) (p(f(X,Y))

(Y(q(X,Y) ~p(Y))))

3. "X (~p(X) (("Y ~p(Y) (p(f(X,Y)))

(W (q(X,W) ~p(W))))

4. "X (~p(X) (("Y ~p(Y) (p(f(X,Y)))

(q(X,g(X)) ~p(g(X)))))

5. ~p(X) ((~p(Y) (p(f(X,Y)))

(q(X,g(X)) ~p(g(X)))))

6. (~p(X) ~p(Y)) (p(f(X,Y)))

(~p(X) (q(X,g(X)))

(~p(X) ~p(g(X)))))

7. {~p(X), ~p(Y)),p(f(X,Y))},

{~p(X) , q(X,g(X)))},

{~p(X), ~p(g(X))}

8. {~p(X1), ~p(Y1)), p(f(X1,Y1))},

{~p(X2) , q(X2,g(X2)))},

{~p(X3), ~p(g(X3))}

- Unification is the process of determining whether two expressions can be made identical by appropriate substitutions for their variables.
- The substitution applies to variables of both expressions and makes them syntactically equivalent. The result is a substitution instance.
- Example of a unifying substitution
S1: p(X,f(b),Z)   S2: p(a,Y,W)

U = {X/a, Y/f(b), Z/W}

S1[U] = S2[U] = p(a,f(b),W)  (the substitution instance)
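Applying a substitution is a purely mechanical recursion over a term. A minimal Python sketch, assuming a hypothetical encoding of terms as strings (constants and variables) and tuples for compound terms:

```python
# Terms as strings (constants, variables) or tuples like ('p', 'X', ('f', 'b'), 'Z');
# this encoding is a hypothetical choice for illustration.

def substitute(term, u):
    """Apply substitution u (dict: variable -> term) to term."""
    if isinstance(term, tuple):
        return tuple(substitute(t, u) for t in term)
    return u.get(term, term)    # replace a bound symbol, else leave unchanged

s1 = ('p', 'X', ('f', 'b'), 'Z')
s2 = ('p', 'a', 'Y', 'W')
u = {'X': 'a', 'Y': ('f', 'b'), 'Z': 'W'}
print(substitute(s1, u))                       # ('p', 'a', ('f', 'b'), 'W')
print(substitute(s1, u) == substitute(s2, u))  # True: a common substitution instance
```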

A substitution is legal if:

1. Each variable is associated with at most one expression;

2. No variable with an associated expression occurs within any associated expression. That is, no "left side" appears on a "right side":

U = {X/a, Y/f(X), Z/Y}

is not a legal substitution

Unification of sentences

A set of expressions {S1,S2,….Sn}

is unifiable if there is a substitution U that makes them identical:

S1[U] =S2 [U] =… = Sn [U]

Ex: U = {X/a, Y/b, Z/c} unifies the expressions p(a,Y,Z) and p(X,b,Z) and p(a,Y,c)

Unifiers are not unique!

- Two expressions can have more than one unifier that makes them equivalent:
S1 = p(X,Y,Y) and S2 = p(a,Z,Z)

are unified by

U1={X/a,Y/Z} and U2={X/a,Y/b, Z/b}

S1[U1] = S2[U1] = p(a,Z,Z)

S1[U2] = S2[U2] = p(a,b,b)

- Some unifiers are more general than others:
U1 is more general than U2 if there exists a unifier U3 such that U1U3 = U2

Example:

U1 = {X/f(Y), Z/W, R/c} is more general than

U2 = {X/f(a), Z/b, R/c}, since for U3 = {Y/a, W/b}:

U1 U3 = {X/f(Y), Z/W, R/c}{Y/a, W/b} = U2
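Composition itself is easy to implement: apply U3 to the right-hand sides of U1, then add U3's own bindings for variables U1 does not mention. A sketch under a hypothetical term encoding (strings and nested tuples); note that strict composition also carries the bindings Y/a and W/b, while the slide's U2 lists only the bindings for U1's variables:

```python
def substitute(term, u):
    """Apply substitution u (dict: variable -> term) to term."""
    if isinstance(term, tuple):
        return tuple(substitute(t, u) for t in term)
    return u.get(term, term)

def compose(u1, u3):
    """Substitution such that applying it equals applying u1, then u3."""
    result = {v: substitute(t, u3) for v, t in u1.items()}  # u3 on ranges of u1
    for v, t in u3.items():
        result.setdefault(v, t)   # keep u3's bindings for variables u1 omits
    return result

u1 = {'X': ('f', 'Y'), 'Z': 'W', 'R': 'c'}
u3 = {'Y': 'a', 'W': 'b'}
print(compose(u1, u3))
# {'X': ('f', 'a'), 'Z': 'b', 'R': 'c', 'Y': 'a', 'W': 'b'}
```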

- Unifiers form a partial order

- For each set of unifiable sentences, there exists a most general unifier (mgu) that is unique up to variable renaming.
  Ex: for S1 = p(X,Y,f(Z)) and S2 = p(a,W,R), U = {X/a, Y/W, R/f(Z)} is the mgu

- For the resolution procedure, we want to find the most general unifier of literals. An expression is a constant, a variable, or a functional object of length n.

- Constant(exp) returns true if exp is a constant. Ex: a, f(g(a,b,c)) are constants
- Variable(exp) returns true if exp is a simple variable
- Length(exp) returns the number of items in a function. Ex: f(a,g(Y)) is an object of length 2.
- MguVar(Var, exp) returns
- false if Var is included in exp
- {Var/exp} otherwise

function Mgu(exp1, exp2) returns unifier

  if exp1 = exp2 return {}
  if Variable(exp1) then return MguVar(exp1, exp2)
  if Variable(exp2) then return MguVar(exp2, exp1)
  if Constant(exp1) or Constant(exp2) return false
  if not(Length(exp1) = Length(exp2)) return false
  unifier := {}
  for i := 1 to Length(exp1) do
    s := Mgu(Part(exp1,i), Part(exp2,i))
    if s is false return false
    unifier := Compose(unifier, s)
    exp1 := Substitute(exp1, unifier)
    exp2 := Substitute(exp2, unifier)
  return unifier

end
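A runnable Python sketch of this procedure, under assumed conventions: variables are uppercase strings, constants and function symbols are lowercase strings, and compound terms are tuples whose first item is the functor. These conventions are a hypothetical encoding, not part of the lecture.

```python
def is_var(e):
    return isinstance(e, str) and e[0].isupper()

def occurs(var, e):
    """True if var occurs anywhere inside expression e."""
    return e == var or (isinstance(e, tuple) and any(occurs(var, t) for t in e))

def substitute(e, u):
    """Apply substitution u (dict: variable -> expression) to e."""
    if isinstance(e, tuple):
        return tuple(substitute(t, u) for t in e)
    return u.get(e, e)

def mgu(e1, e2):
    """Most general unifier of e1 and e2 as a dict, or None on failure."""
    if e1 == e2:
        return {}
    if is_var(e1):                      # MguVar, with occurs check
        return None if occurs(e1, e2) else {e1: e2}
    if is_var(e2):
        return None if occurs(e2, e1) else {e2: e1}
    if isinstance(e1, str) or isinstance(e2, str):
        return None                     # distinct constants, or constant vs term
    if len(e1) != len(e2):
        return None
    unifier = {}
    for t1, t2 in zip(e1, e2):          # unify part by part
        s = mgu(substitute(t1, unifier), substitute(t2, unifier))
        if s is None:
            return None
        unifier = {v: substitute(t, s) for v, t in unifier.items()}
        unifier.update(s)               # Compose(unifier, s)
    return unifier

# Worked example: p(f(X,g(a,Y)), g(a,Y)) and p(f(X,Z), Z)
t1 = ('p', ('f', 'X', ('g', 'a', 'Y')), ('g', 'a', 'Y'))
t2 = ('p', ('f', 'X', 'Z'), 'Z')
print(mgu(t1, t2))   # {'Z': ('g', 'a', 'Y')}
```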

Example of MGU trace (1)

Find the mgu unifier for

p(f(X,g(a,Y)), g(a,Y)) and p(f(X,Z),Z).

1. Mgu is called on p and p, returns {}

2. Mgu is called recursively on f(X, g(a,Y)) and f(X,Z)

3. Mgu is called on f and f, returns {}

4. Mgu is called on X and X, returns {}

5. Mgu is called on g(a,Y) and Z; since Z is a variable, it returns (via MguVar) {Z/g(a,Y)} after checking that Z does not occur in g(a,Y).

6. {Z/g(a,Y)} is composed with the previous (empty) substitution

7. The entire substitution is applied to both expressions, yielding f(X, g(a,Y))

8. Since i = 3, Mgu returns {Z/g(a,Y)}, which is then applied to the top level expressions. All other checks show equality. The result is

p(f(X,g(a,Y)),g(a,Y))

- Predicate indexing: store the answers to some queries
- Knows(John, X): store (in a hash table) all X that John knows
- Use a subsumption lattice to be more efficient

Given a clause containing the literal φ and another clause containing the literal ~φ, we can infer the clause consisting of all the literals of both clauses without φ and ~φ.

Ex:

1. {p, q}      KB

2. {~q, r}     KB

3. {p, r}      1, 2

Consider the two clauses

{winter, summer}

{~winter, cold}

At any point, winter is either true or false. If it is true, then cold must be true to guarantee the truth of clause 2. If winter is false, then summer must be true to guarantee the truth of clause 1. So regardless of winter's truth value, either cold or summer must be true, i.e., we can conclude:

{cold, summer}
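The propositional rule is a one-line set operation. A sketch with clauses represented as Python frozensets of string literals, where `'~p'` denotes the negation of `'p'` (a representation chosen for illustration):

```python
def negate(lit):
    """'p' <-> '~p'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal pair."""
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

winter_clause = frozenset({'winter', 'summer'})
cold_clause = frozenset({'~winter', 'cold'})
print(resolvents(winter_clause, cold_clause))
# one resolvent, the clause {cold, summer}
```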

1. {p, q}      KB

2. {~p, q}     KB

3. {q}         1, 2   (merge the q's)

1. {p}         KB

2. {~p}        KB

3. { }         1, 2

We can derive the empty clause, showing that the KB has a contradiction.

1. {~p, q}     KB

2. {p}         KB

3. {q}         1, 2

This is much like Modus Ponens.

Resolution Rule

For clauses containing variables, we can resolve φ in one clause with ~ψ in another clause, as long as φ and ψ have an mgu U. The resulting clause is the union of the two original clauses, with φ and ~ψ removed and with U applied to the remaining literals:

{ρ, φ} and {~ψ, δ}

yield {ρ[U], δ[U]}

where φ[U] = ψ[U]

Examples of Resolutions

- Ex1:
  1. {p(X), q(X,Y)}        KB
  2. {~p(a), r(b,Z)}       KB
  3. {q(a,Y), r(b,Z)}      1, 2

- Ex2: Two clauses may resolve in more than one way, since φ and ~ψ may be chosen in different ways:
  1. {p(X,X), q(X), r(X)}        KB
  2. {~p(a,Z), ~q(b)}            KB
  3. {q(a), r(a), ~q(b)}         1, 2
  4. {p(b,b), r(b), ~p(a,Z)}     1, 2
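The lifted rule can be sketched by combining a unifier with the clause union. The encoding below is a hypothetical choice for illustration: variables are uppercase strings, compound terms are tuples with the functor first, and a negative literal is wrapped as `('~', term)`.

```python
def is_var(e):
    return isinstance(e, str) and e[0].isupper()

def occurs(v, e):
    return e == v or (isinstance(e, tuple) and any(occurs(v, t) for t in e))

def subst(e, u):
    if isinstance(e, tuple):
        return tuple(subst(t, u) for t in e)
    return u.get(e, e)

def mgu(e1, e2):
    """Most general unifier as a dict, or None on failure."""
    if e1 == e2:
        return {}
    if is_var(e1):
        return None if occurs(e1, e2) else {e1: e2}
    if is_var(e2):
        return None if occurs(e2, e1) else {e2: e1}
    if isinstance(e1, str) or isinstance(e2, str) or len(e1) != len(e2):
        return None
    u = {}
    for t1, t2 in zip(e1, e2):
        s = mgu(subst(t1, u), subst(t2, u))
        if s is None:
            return None
        u = {v: subst(t, s) for v, t in u.items()}
        u.update(s)
    return u

def is_neg(lit):
    return isinstance(lit, tuple) and lit[0] == '~'

def atom(lit):
    return lit[1] if is_neg(lit) else lit

def resolve(c1, c2):
    """All resolvents of clauses c1 and c2 (lists of literals)."""
    out = []
    for l1 in c1:
        for l2 in c2:
            if is_neg(l1) == is_neg(l2):
                continue                    # need complementary literals
            u = mgu(atom(l1), atom(l2))
            if u is None:
                continue
            out.append([subst(l, u) for l in c1 if l is not l1] +
                       [subst(l, u) for l in c2 if l is not l2])
    return out

# Ex1 above: {p(X), q(X,Y)} with {~p(a), r(b,Z)} yields {q(a,Y), r(b,Z)}
c1 = [('p', 'X'), ('q', 'X', 'Y')]
c2 = [('~', ('p', 'a')), ('r', 'b', 'Z')]
print(resolve(c1, c2))   # [[('q', 'a', 'Y'), ('r', 'b', 'Z')]]
```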

Resolution Deduction

A resolution deduction of a clause φ from a database KB is a sequence of clauses in which

1. φ is an element of the sequence;

2. Each element is either a member of KB or the result of applying the resolution principle to clauses earlier in the sequence.

Resolution procedure with nondeterministic choices

function Resolution(KB) returns answer

  while not(Empty_Clause(KB)) do
    c1 := Choose_Clause(KB)
    c2 := Choose_Clause(KB)
    res := Choose_Resolvents(c1, c2)
    KB := KB ∪ {res}
  end
  return true

end

1. {p}         KB

2. {~p, q}     KB

3. {~q, r}     KB

4. {~r}        KB

5. {q}         1, 2

6. {~q}        3, 4

7. { }         5, 6
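A derivation like this can be found mechanically by saturation: keep adding resolvents until the empty clause appears or nothing new can be derived. A runnable Python sketch for the propositional case, with clauses as frozensets of string literals and `'~p'` denoting the negation of `'p'` (a representation chosen for illustration):

```python
def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    return [(c1 - {l}) | (c2 - {negate(l)})
            for l in c1 if negate(l) in c2]

def refute(kb):
    """True if kb is unsatisfiable, i.e. the empty clause is derivable."""
    clauses = set(kb)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:                 # derived { }: contradiction
                        return True
                    if r not in clauses:
                        new.add(r)
        if not new:                           # saturated without { }
            return False
        clauses |= new

kb = [frozenset({'p'}), frozenset({'~p', 'q'}),
      frozenset({'~q', 'r'}), frozenset({'~r'})]
print(refute(kb))   # True: this KB is contradictory
```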

[Graph omitted: all possible resolutions (to 3 levels) of the four KB clauses {p}, {~p,q}, {~q,r}, {~r}; the resolvents include {q}, {~q}, {~p,r}, {r}, {~p}, and the empty clause { }.]

A "resolution trace" is a linear form of the graph.

Resolution Refutation

- As in propositional logic, we will use resolution refutation as the single rule for proving sentences from a KB.
- We use it to prove the unsatisfiability of a set of clauses, i.e., that they contain a contradiction, shown by deriving the empty clause.
- Resolution refutation: to prove c, show that KB ∪ {~c} yields the empty clause { }, like a "proof by contradiction".
- The rule is refutation-complete.

- True/False questions: we want to know whether a conclusion follows from KB:
  Ex: father(art, jon)
      father(bob, kim)
      father(X, Y) => parent(X, Y)
  Is art the parent of jon?

- Queries: fill-in-the-blank: we also want to know what instantiation makes it true: Who is the parent of jon?

1. {father(art, jon)}              KB

2. {father(bob, kim)}              KB

3. {~father(X,Y), parent(X,Y)}     KB

4. {~parent(art, jon)}             negated goal clause

5. {parent(art, jon)}              1, 3

6. {parent(bob, kim)}              2, 3

7. {~father(art, jon)}             3, 4

8. { }                             4, 5

9. { }                             1, 7

- To obtain one instantiation that makes the conclusion true (if any), form a disjunction of the negation of the goal c and its "answer literal": a term of the form
  ans(X1, X2, …, Xn)
  where the variables X1, X2, …, Xn are the free variables in c.
  Ex: add to the previous KB the clause {~parent(X, jon), ans(X)}
- Resolution halts when it derives a clause containing only the ans literal.

Example of fill-the-blank derivation

1. {father(art, jon)}              KB

2. {father(bob, kim)}              KB

3. {~father(X,Y), parent(X,Y)}     KB

4. {~parent(Z, jon), ans(Z)}       c

5. {parent(art, jon)}              1, 3

6. {parent(bob, kim)}              2, 3

7. {~father(W, jon), ans(W)}       3, 4

8. {ans(art)}                      4, 5

9. {ans(art)}                      1, 7

Properties of Fill-in-the-blank

- The answer may not be unique: several different answers may result from the proof, depending on the clauses that were chosen for resolution.
- We may also get an answer of the form {ans(a), ans(b)}, where one of the answers is right, but the clause doesn't tell us which one:
  1. {father(art, jon), father(bob, jon)}   KB
  2. {~father(X, jon), ans(X)}              c
  3. {father(bob, jon), ans(art)}           1, 2
  4. {ans(art), ans(bob)}                   2, 3

- k: number of predicate arguments
- p: number of predicates
- n: number of constant symbols
- p·n^k distinct ground facts
- Propositional logic with the above facts
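The p·n^k count can be confirmed by enumerating the ground facts directly. A tiny Python illustration; the predicate and constant names are made up for the example:

```python
from itertools import product

predicates = ['father', 'parent']   # p = 2 predicates (names made up)
constants = ['art', 'bob', 'jon']   # n = 3 constant symbols
k = 2                               # arguments per predicate

# every distinct ground fact: one predicate applied to k constants
facts = [(pred, *args)
         for pred in predicates
         for args in product(constants, repeat=k)]

print(len(facts))                   # 18 = p * n**k = 2 * 3**2
```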

- Problems of simple forward chaining (FC):
- The inner loop involves pattern matching
- The algorithm rechecks every rule on each iteration
- It might generate many irrelevant facts

- These problems should be addressed

- Matching a definite clause against a set of known facts is hard: every finite-domain CSP can be expressed as a single definite clause together with some associated ground facts.

- Every new fact inferred on iteration t must be derived from at least one new fact inferred on iteration t - 1.
- Retain the partial match of each rule in order to reuse it in later iterations.

- Three ways to avoid generating irrelevant facts:
- Use backward chaining
- Restrict to a relevant subset of rules
- Use a magic set: incorporate some information about the goal into the rules so that irrelevant facts are ignored

Algorithm = Logic + Control

- Prolog = Programming in Logic
