
Knowledge-Based Agents

Jacques Robin



Introductory Example: Is West Criminal?

  • Is West criminal?

    • “US law stipulates that it is a crime for a US citizen to sell weapons to a hostile country. Nono owns missiles, all of which it bought from Captain West, an American citizen.”

  • How to solve this simple classification problem?

  • Using a knowledge-based agent:

    • Identify knowledge about the decision domain

    • Represent it using a formal language in which it is possible to perform automated reasoning

    • Implement (or reuse) an inference engine that performs such reasoning


Knowledge-Based Agents

[Diagram: architecture of a knowledge-based agent. Sensors pass percepts from the environment to an automated reasoning component, which couples a domain-specific knowledge base with a generic, domain-independent inference engine through the Ask/Tell/Retract interface (knowledge representation and acquisition); chosen actions flow back to the environment through the effectors.]
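The Ask/Tell/Retract interface above can be made concrete with a minimal Python sketch; all class and method names are illustrative assumptions, and entailment is stubbed out where a real inference engine would run:

  class KnowledgeBase:
      """Domain-specific knowledge base manipulated via Tell/Ask/Retract."""
      def __init__(self):
          self.sentences = set()

      def tell(self, sentence):      # add new knowledge (e.g., a percept)
          self.sentences.add(sentence)

      def retract(self, sentence):   # remove outdated knowledge
          self.sentences.discard(sentence)

      def ask(self, query):          # does the KB entail the query?
          # Stub: a real generic engine would perform automated reasoning
          # (e.g., resolution theorem proving) here.
          return query in self.sentences

  class KnowledgeBasedAgent:
      def __init__(self, kb):
          self.kb = kb

      def step(self, percept, action_query):
          self.kb.tell(percept)             # Tell: record what was sensed
          return self.kb.ask(action_query)  # Ask: derive what to do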



What is Knowledge?

  • Data, information or abstraction formatted in a way that allows a human or machine to reason with it, deriving from it new data, information or abstraction, e.g.:

    • Classes and objects

    • Logical formulas

    • Prior and conditional probability distributions over a set of random variables

  • Q: What is reasoning?

  • A: A systematic mechanism to infer or derive new knowledge from new percepts and/or prior knowledge, e.g.:

    • Inheritance of attributes from classes to their sub-classes and objects

    • Classical First-Order Logic (CFOL) theorem proving using refutation, resolution and unification

    • Computing posterior probabilities from prior and conditional ones using Bayes' theorem
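The last kind of inference above fits in a few lines; a minimal Python sketch of a posterior computation with Bayes' theorem, with all probability values illustrative:

  # Bayes' theorem: P(h|e) = P(e|h)P(h) / P(e), where
  # P(e) = P(e|h)P(h) + P(e|not h)P(not h).
  def posterior(p_h, p_e_given_h, p_e_given_not_h):
      p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
      return p_e_given_h * p_h / p_e

  # Illustrative numbers: prior P(wumpus nearby) = 0.2,
  # P(stench | wumpus nearby) = 0.9, P(stench | no wumpus nearby) = 0.1.
  print(posterior(0.2, 0.9, 0.1))  # ~0.69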



Example KBA: Logic-Based Agent

Given B as axioms, is formula f a theorem of logic L?

B |=L f ?

[Diagram: a logic-based agent. Sensors tell percepts to a knowledge base B, a domain model in logic L; an inference engine, a theorem prover for logic L, is asked whether B |=L f, and tells/retracts conclusions; actuators execute the chosen actions in the environment.]


Example of Automated Reasoning with Knowledge Base: Is West Criminal?

From the knowledge:

1. It is a crime for a US citizen to sell weapons to a hostile nation

2. Nono owns missiles

3. Nono bought all its missiles from Captain West

4. West is a US citizen

5. Nono is a nation

6. Nono is an enemy of the USA

7. A missile is a weapon

8. Enmity is the highest form of hostility

9. The USA is a nation

Can we infer the knowledge Q:

0. West is a criminal?

Representing this knowledge in a CFOL KB:

(∀P,W,N american(P) ∧ weapon(W) ∧ nation(N) ∧ hostile(N) ∧ sells(P,N,W) ⇒ criminal(P)) //1

∧ (∃W owns(nono,W) ∧ missile(W)) //2

∧ (∀W owns(nono,W) ∧ missile(W) ⇒ sells(west,nono,W)) //3

∧ american(west) //4

∧ nation(nono) //5

∧ enemy(nono,america) //6

∧ (∀W missile(W) ⇒ weapon(W)) //7

∧ (∀N enemy(N,america) ⇒ hostile(N)) //8

∧ nation(america) //9

A CFOL theorem prover can be used to answer the query:

KB |= criminal(west) //0

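Before the resolution proof on the later slides, note that the query can also be checked mechanically; a minimal Python sketch (not the slides' theorem prover) that forward-chains over the KB hand-translated into Horn rules, naively enumerating variable bindings, which is fine for this tiny domain:

  from itertools import product

  # KB clauses 1, 3, 7, 8 as Horn rules (premises, conclusion); capitalized
  # strings are variables. Facts cover clauses 2, 4, 5, 6, 9, with m1 the
  # Skolem constant for clause 2.
  RULES = [
      ([("american","P"), ("weapon","W"), ("nation","N"), ("hostile","N"),
        ("sells","P","N","W")], ("criminal","P")),
      ([("owns","nono","W"), ("missile","W")], ("sells","west","nono","W")),
      ([("missile","W")], ("weapon","W")),
      ([("enemy","N","america")], ("hostile","N")),
  ]
  FACTS = {("american","west"), ("nation","nono"), ("enemy","nono","america"),
           ("owns","nono","m1"), ("missile","m1"), ("nation","america")}

  def forward_chain(facts, rules):
      consts = {t for f in facts for t in f[1:]}
      changed = True
      while changed:
          changed = False
          for premises, concl in rules:
              vars_ = sorted({t for p in premises for t in p[1:] if t[0].isupper()})
              for binding in product(consts, repeat=len(vars_)):
                  s = dict(zip(vars_, binding))
                  ground = lambda a: a[:1] + tuple(s.get(t, t) for t in a[1:])
                  if all(ground(p) in facts for p in premises):
                      if ground(concl) not in facts:
                          facts.add(ground(concl)); changed = True
      return facts

  print(("criminal","west") in forward_chain(set(FACTS), RULES))  # True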



Review of CFOL: General Syntax

[UML class diagram: an FCFOLFormula is either an FCFOLAtomicFormula (a PredicateSymbol functor applied to one or more FCFOLTerm arguments), an FCLConnective application (an FCLUnaryConnective with connective ¬ over one argument formula, or an FCLBinaryConnective with connective ∧, ∨, ⇒ or ⇔ over two), or a QuantifierExpression (quantifier ∀ or ∃). An FCFOLTerm is either an FCFOLFunctionalTerm (a FunctionSymbol functor applied to one or more terms) or an FCFOLNonFunctionalTerm (an FOLVariable or a ConstantSymbol).]

Example formula: X,Y (p(f(X),Y) q(g(a,b)))  ((U,V Z ((X = a)  r(Z))  (U = h(V,Z)))))



Review of CFOL: Implicative Normal Form (INF)

[UML class diagram: an INFCFOLFormula is a conjunction (functor ∧) of one or more INFCFOLClauses; each clause is an implication (functor ⇒) from a premise, an INFCLPLHS conjunction (functor ∧) of atomic formulas, to a conclusion, an INFCLPRHS disjunction (functor ∨) of atomic formulas; atomic formulas and terms are as in the general syntax diagram.]

  • Implicative normal form:

    • Conjunction of implications from conjunctions of atoms to disjunctions of atoms, with implicit, universal-only quantifiers

    • For any CFOL formula there is an equivalent INF formula

  • Skolemization:

    • Substitute each existentially quantified variable by a new, distinct constant

    • ex, x míssil(x) by míssil(m1)

Example INF formula: ((p(f(X),Y) ∧ q(g(a,b)) ∧ c) ⇒ ((X = a) ∨ r(Z)))

∧ ((p(U,V) ∧ q(a,U)) ⇒ (d ∨ e ∨ p(c,f(V))))
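A minimal Python sketch of the simplified Skolemization described above, replacing each existentially quantified variable by a fresh constant (full Skolemization would use a function of the enclosing universal variables); the nested-tuple encoding of formulas is an illustrative assumption:

  import itertools
  _fresh = (f"sk{i}" for i in itertools.count(1))  # generator of new constants

  def skolemize(formula):
      """Formulas as nested tuples, e.g. ('exists', 'X', ('missile', 'X'))."""
      def subst(f, env):
          if isinstance(f, str):                   # variable or constant symbol
              return env.get(f, f)
          if f[0] == 'exists':                     # drop the ∃, bind its variable
              _, var, body = f
              return subst(body, {**env, var: next(_fresh)})
          if f[0] == 'forall':                     # keep ∀ (implicit in INF)
              _, var, body = f
              return ('forall', var, subst(body, env))
          return tuple(subst(a, env) for a in f)   # predicates and connectives
      return subst(formula, {})

  print(skolemize(('exists', 'X', ('missile', 'X'))))  # ('missile', 'sk1')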


Review of CFOL: Term Unification

[Diagram: worked unification examples on term trees, e.g. p(a,X) unifies with p(a,b) under substitution X/b; p(X,Y) unifies with p(a,a) under X/a, Y/a; terms with clashing constants, such as a vs. b, fail to unify; unifying functional terms binds variables to whole sub-terms, e.g. X/f(c,Z) or X/f(c,d) with Y/a and Z/d. Unifying X with p(a,X) fails by occur-check, which guarantees termination.]
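The diagrammed examples can be reproduced with a minimal Python sketch of unification with occur-check; terms are encoded as nested tuples (an illustrative assumption), with capitalized strings as variables:

  def is_var(t): return isinstance(t, str) and t[0].isupper()

  def walk(t, s):                      # follow variable bindings in s
      while is_var(t) and t in s:
          t = s[t]
      return t

  def occurs(v, t, s):                 # does variable v occur inside term t?
      t = walk(t, s)
      if t == v: return True
      return isinstance(t, tuple) and any(occurs(v, a, s) for a in t)

  def unify(x, y, s=None):
      if s is None: s = {}
      x, y = walk(x, s), walk(y, s)
      if x == y: return s
      if is_var(x):
          return None if occurs(x, y, s) else {**s, x: y}   # occur-check here
      if is_var(y):
          return unify(y, x, s)
      if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
          for a, b in zip(x, y):
              s = unify(a, b, s)
              if s is None: return None
          return s
      return None                      # e.g. p(a,X) vs p(b,Y): a vs b clashes

  print(unify(('p', 'a', 'X'), ('p', 'a', 'b')))   # {'X': 'b'}
  print(unify('X', ('p', 'a', 'X')))               # None: fails by occur-check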



Review of CFOL: Refutation and Resolution

  • Refutation:

    • Proving KB |= Q is equivalent to proving (KB ⇒ Q) |= true, itself equivalent to proving (KB ∧ ¬Q) |= false

    • Why? (KB  Q)  (KB  Q)  (KB  Q)  (KB  Q)

  • Resolution:

    • Binary propositional case: ((A ∨ B) ∧ (¬B ∨ C)) ⇒ (A ∨ C); A ∨ B and ¬B ∨ C resolve into A ∨ C

    • Binary first-order case: ((A ∨ B) ∧ (¬C ∨ D) ∧ θ(B) = θ(C)) ⇒ (θ(A) ∨ θ(D)), where θ is a set of variable substitutions that unify B with C; A ∨ B and ¬C ∨ D resolve into θ(A) ∨ θ(D) through the unification of B and C with θ

    • N-ary first-order case (INF clauses): if θ(Pi) = θ(Dj), then (P1 ∧ ... ∧ Pn ⇒ C1 ∨ ... ∨ Cm) and (Q1 ∧ ... ∧ Qk ⇒ D1 ∨ ... ∨ Dl) resolve into (θ(P1) ∧ ... ∧ θ(Pi-1) ∧ θ(Pi+1) ∧ ... ∧ θ(Pn) ∧ θ(Q1) ∧ ... ∧ θ(Qk)) ⇒ (θ(C1) ∨ ... ∨ θ(Cm) ∨ θ(D1) ∨ ... ∨ θ(Dj-1) ∨ θ(Dj+1) ∨ ... ∨ θ(Dl))
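A minimal Python sketch of the binary propositional case, with clauses as sets of (atom, polarity) literals; pairing this with unification (previous slide) yields the first-order case:

  # From A ∨ B and ¬B ∨ C, derive the resolvent A ∨ C.
  def resolve(c1, c2):
      """Return all binary resolvents of two clauses."""
      resolvents = []
      for (atom, positive) in c1:
          if (atom, not positive) in c2:          # complementary literals
              r = (c1 - {(atom, positive)}) | (c2 - {(atom, not positive)})
              resolvents.append(frozenset(r))
      return resolvents

  a_or_b    = frozenset({("a", True), ("b", True)})
  notb_or_c = frozenset({("b", False), ("c", True)})
  print(resolve(a_or_b, notb_or_c))  # [frozenset({('a', True), ('c', True)})]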


Example of Automated Reasoning with Knowledge Base: Is West Criminal? (continued)

Refutation proof: showing that the following KB' = KB ∧ ¬Q

(∀P,W,N american(P) ∧ weapon(W) ∧ nation(N) ∧ hostile(N) ∧ sells(P,N,W) ⇒ criminal(P)) //1

∧ (∃W owns(nono,W) ∧ missile(W)) //2

∧ (∀W owns(nono,W) ∧ missile(W) ⇒ sells(west,nono,W)) //3

∧ american(west) //4

∧ nation(nono) //5

∧ enemy(nono,america) //6

∧ (∀W missile(W) ⇒ weapon(W)) //7

∧ (∀N enemy(N,america) ⇒ hostile(N)) //8

∧ nation(america) //9

∧ ¬criminal(west) //0

is inconsistent, i.e., that false can be derived from it

Step 1: generate the implicative normal form KB'' of KB'

(american(P) ∧ weapon(W) ∧ nation(N) ∧ hostile(N) ∧ sells(P,N,W) ⇒ criminal(P)) //1

∧ (T ⇒ owns(nono,m1)) // skolemization, 2a

∧ (T ⇒ missile(m1)) // 2b

∧ (owns(nono,W) ∧ missile(W) ⇒ sells(west,nono,W)) //3

∧ (T ⇒ american(west)) //4

∧ (T ⇒ nation(nono)) //5

∧ (T ⇒ enemy(nono,america)) //6

∧ (missile(W) ⇒ weapon(W)) //7

∧ (enemy(N,america) ⇒ hostile(N)) //8

∧ (T ⇒ nation(america)) //9

∧ (criminal(west) ⇒ F) //0



Example of Automated Reasoning with Knowledge Base: Is West Criminal? (continued)

Step 2: repeatedly apply the resolution rule to pairs of clauses (A,B) where A's premise unifies with B's conclusion

(american(P)  weapon(W) 

nation(N)  hostile(N) 

sells(P,N,W)  criminal(P)) //1

 (T  owns(nono,m1)) //2a

 (T  missile(m1)) //2b

 (owns(nono,W)  missile(W) sells(west,nono,W)) //3

 (T  american(west)) //4

 (T  nation(nono)) //5

 (T  enemy(nono,america)) //6

 (missile(W)  weapon(W)) //7

 (enemy(N,america) hostile(N)) //8

 (T  nation(america)) //9

 (criminal(west)  F)//0

1. Resolve 0 with 1 unifying P/west:

american(west) ∧ weapon(W) ∧ nation(N) ∧ hostile(N) ∧ sells(west,N,W) ⇒ F //10

2. Resolve 10 with 4:

weapon(W) ∧ nation(N) ∧ hostile(N) ∧ sells(west,N,W) ⇒ F //11

3. Resolve 11 with 7:

missile(W) ∧ nation(N) ∧ hostile(N) ∧ sells(west,N,W) ⇒ F //12

4. Resolve 12 with 2b unifying W/m1:

nation(N) ∧ hostile(N) ∧ sells(west,N,m1) ⇒ F //13

5. Resolve 13 with 5 unifying N/nono:

hostile(nono) ∧ sells(west,nono,m1) ⇒ F //14

6. Resolve 14 with 8 unifying N/nono:

enemy(nono,america) ∧ sells(west,nono,m1) ⇒ F //15

7. Resolve 15 with 6:

sells(west,nono,m1) ⇒ F //16

8. Resolve 16 with 3 unifying W/m1:

owns(nono,m1) ∧ missile(m1) ⇒ F //17

9. Resolve 17 with 2a:

missile(m1) ⇒ F //18

10. Resolve 18 with 2b: F




Dimensions of Knowledge Classification

  • Knowledge in a KBA can be characterized along the following (largely orthogonal) categorization dimensions:

    • Intensional x Extensional

    • Persistent x Volatile

    • Structural x Behavioral

    • Diagnostic x Causal

    • Synchronous x Diachronous

    • Certain x Uncertain

    • Explicit x Implicit

    • Precise x Vague

    • Declarative x Procedural

    • Common Sense x Expert

    • Domain-Level x Meta-Level


Intensional x Extensional Knowledge

Intensional knowledge: about classes of entities and their generic relationships

Domain concept hierarchy:

ex, X, wumpus(X)  monster(X).

Domain integrity constraints:

ex, X,Y wumpus(X)  wumpus(Y)  X = Y.

Domain behavior laws:

ex, X,Y smelly(X,Y)  (loc(wumpus,X+1,Y)  loc(wumpus,X-1,Y) loc(wumpus,X,Y+1)  loc(wumpus,X,Y-1).

Database schema

Object-Oriented Programming (OOP) classes

Universally quantified CFOL formulas

Document Schema (XML schema)

Extensional knowledge: about specific entity instances and their particular relationships

Facts, propositions about concept instances

e.g., (loc(wumpus,2,1) ∨ loc(wumpus,1,2) ∨ loc(wumpus,2,3)) ∧ alive(wumpus,4)

ex,alive(wumpus,7).

Data (databases)

Examples (machine learning)

Cases (case-based reasoning)

OOP objects

Ground CFOL formulas

Classical Propositional Logic (CPL) formulas

Document (XML)




Persistent x Volatile Knowledge

  • Persistent Knowledge:

    • Valid during the lifetime of the agent, across the several tasks that it carries out ≈ a program

    • ex, X,Y,T smelly(X,Y,T)  (loc(wumpus,X+1,Y,T)  loc(wumpus,X-1,Y,T) loc(wumpus,X,Y+1,T)  loc(wumpus,X,Y-1,T)

    • Generally but not necessarily intentional

  • Volatile Knowledge:

    • Temporary buffer knowledge, valid only during the execution context of one particular task of the agent's lifetime ≈ data

    • e.g., (loc(wumpus,2,1,T4) ∨ loc(wumpus,1,2,T4) ∨ loc(wumpus,2,3,T4)) ∧ alive(wumpus,4,T4)

    • ex,alive(wumpus,T7).

    • Generally but not necessarily extensional



Structural x Behavioral Knowledge

  • Structural knowledge:

    • Specifies the properties, relations and types of domain entities

    • Key parts are a generalization taxonomy and integrity constraints

    • ex, M, wumpus(M)  monster(M).  M,T monster(M)  alive(M,T)  dangerous(M,T). M,X,Y,T dangerous(M,T)  loc(M,X,Y,T)   safe(X,Y,T).

  • Behavioral knowledge:

    • Specifies the state changes of domain entities, the events they participate in, and the actions they perform

    • ex , X,Y,T loc(agent,X,Y,T)  orientation(0,T)  forward(T)   loc(wall,X,Y+1)  loc(agent,X,Y+1,T+1).



Causal x Diagnostic Knowledge

  • Causal knowledge:

    • Predictive model from cause to effect

    • ex,X,Y,T loc(agent,X,Y,T)  orientation(0,T)  forward(T)   loc(wall,X,Y+1)  loc(agent,X,Y+1,T+1).

  • Diagnostic knowledge:

    • Hypothesis forming model from observed effects to plausible causes

    • ex,  X,Y,T smell(stench,X,Y,T)  smelly(X,Y). X,Y smelly(X,Y)  (loc(wumpus,X+1,Y)  loc(wumpus,X-1,Y)  loc(wumpus,X,Y+1)  loc(wumpus,X,Y-1)).



Synchronous x Diachronous Knowledge

  • Diachronous knowledge:

    • Describes the relation between the value of a given property of the environment before the occurrence of an event and the value of that same property after the occurrence of that event

    • Links an event occurrence with its pre- and post-conditions

    • ex,X,Y,T loc(agent,X,Y,T)  orientation(0,T)  forward(T)   loc(wall,X,Y+1)  loc(agent,X,Y+1,T+1).

  • Synchronous knowledge:

    • Describes the relation between the values of two distinct properties of the environment that hold at the same time

    • Domain event invariants

    • ex,M,X,Y,T dangerous(M,T)  loc(M,X,Y,T)   safe(X,Y,T).



Certain x Uncertain Knowledge

  • Certain knowledge:

    • A statement epistemologically guaranteed to be true or false

    • ex, X,Y smelly(X,Y)   smelly(X+1,Y-1)  smelly(X-1,Y-1)  loc(wumpus,X,Y+1).

  • Uncertain knowledge:

    • A statement whose truth value is uncertain

    • The truth of the statement is merely possible, plausible or probable

    • ex,  X,Y smelly(X,Y,1)  (loc(wumpus,X+1,Y)  loc(wumpus,X-1,Y)  loc(wumpus,X,Y+1)  loc(wumpus,X,Y-1)).

    • ex, X,Y □smelly(X,Y,1)  (◊loc(wumpus,X+1,Y)  ◊loc(wumpus,X-1,Y)  ◊loc(wumpus,X,Y+1)  ◊loc(wumpus,X,Y-1)).

    • ex, X,Y,U,V smelly(X,Y,1)  (U 1 X+1)  (U 1 X-1)  (V 1 Y+1)  (V 1 Y-1)  loc(wumpus,X+1,Y,P1)  loc(wumpus,X-1,Y,P2)  loc(wumpus,X,Y+1,P3)  loc(wumpus,X,Y-1,P4)  loc(wumpus,X+1,Y,P5) loc(wumpus,X-1,Y,P6) loc(wumpus,X,Y+1,P7) loc(wumpus,X,Y-1,P8)) loc(wumpus,U,V,P9) loc(wumpus,U,V,P10) P1 = P2 = P3 = P4 = P10  P5 = P6 = P7 = P8 = P9.

    • ex,  X,Y p(loc(wumpus,X+1,Y) | smelly(X,Y,1)) = 0.25 p(loc(wumpus,X-1,Y) | smelly(X,Y,1)) = 0.25 p(loc(wumpus,X,Y+1) | smelly(X,Y,1)) = 0.25 p(loc(wumpus,X,Y-1) | smelly(X,Y,1)) = 0.25.



Precise x Vague Knowledge

  • Precise (or crisp) knowledge:

    • size(wumpus, 2.80), loc(wumpus) = (X,Y)

  • Vague (or soft) knowledge:

    • tall(wumpus), loc(wumpus) = around(X,Y)

  • Fuzzy approach to vague knowledge:

    • Class membership function of entities map to [0,1] instead of {true,false}

    • Class membership statements are atomic formula of fuzzy logic

    • Connective semantics generally defined by (see the sketch after this list):

      • fuzzyValue(a ∧ b) = min(fuzzyValue(a), fuzzyValue(b))

      • fuzzyValue(a ∨ b) = max(fuzzyValue(a), fuzzyValue(b))

      • fuzzyValue(¬a) = 1 – fuzzyValue(a)

    • Example:

      • Given: fuzzyValue(tall(wumpus)) = 0.6 ∧ fuzzyValue(heavy(wumpus)) = 0.4,

      • derive: fuzzyValue(tall(wumpus) ∧ heavy(wumpus)) = 0.4

      • but also derive: fuzzyValue(tall(wumpus) ∧ ¬tall(wumpus)) = 0.4

  • Debate still raging on:

    • Whether vagueness and uncertainty are orthogonal characteristics or two sides of the same coin

    • Whether fuzzy sets and fuzzy logic have any inherent advantage in representing either
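The min/max/(1 − x) connective semantics above takes only a few lines; a minimal Python sketch with the illustrative values from the example:

  def f_and(a, b): return min(a, b)   # fuzzyValue(a ∧ b)
  def f_or(a, b):  return max(a, b)   # fuzzyValue(a ∨ b)
  def f_not(a):    return 1 - a       # fuzzyValue(¬a)

  tall, heavy = 0.6, 0.4              # illustrative membership degrees
  print(f_and(tall, heavy))           # 0.4: tall(wumpus) ∧ heavy(wumpus)
  print(f_and(tall, f_not(tall)))     # 0.4: tall ∧ ¬tall is not 0, unlike in CFOL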



Explicit x Implicit Knowledge

  • Explicit knowledge:

    • Sentences in the KB

  • Implicit knowledge:

    • Axioms, simplifying assumptions, integrity constraints, commitments which are not encoded explicitly as sentences in the KB but which must hold for the KB to be a correct model of the environment

    • They should at least be present as comments in the KB or in external documentation, but are often present only in the KB designer's head

    • Implicit knowledge makes the KB simpler, more computationally efficient, more concise and easier to understand (for someone who knows the implicit assumptions), but also far less extensible and reusable.



Implicit x Explicit Knowledge: Illustrative Example

  • The Wumpus World agent KB sentence (explicit knowledge): see(glitter) ⇒ pick.

  • Is correct only under the following simplifying assumptions (implicit knowledge):

    • There is only one agent in the environment

    • See is a percept

    • Pick is an action

    • The scope of the see percept is limited to the cavern where the agent is currently located

    • The gold is the sole glittering object in the environment

    • The gold is the sole object to be picked in the environment

    • The gold is a treasure

    • A treasure is an object worth picking



Implicit x Explicit Knowledge: Illustrative Example

  • Without these implicit assumptions, the same piece of behavioral knowledge must be represented by the far more complex sentence:

    • (∀A,C,T,X,Y agent(A) ∧ loc(C,[(X,Y)]) ∧ time(T) ∧ in(A,C,T) ∧ horizCoord(X) ∧ verticCoord(Y) ∧ percept(A,C,T,vision,glitter) ⇒ ∃O physObj(O) ∧ emit(O,glitter) ∧ in(O,C,T)) ∧ (∀O physObj(O) ∧ emit(O,glitter) ⇒ gold(O)) ∧ (∀O gold(O) ⇒ treasure(O))

      ∧ (∀A,C,T,X,Y,O agent(A) ∧ loc(C,[(X,Y)]) ∧ time(T) ∧ in(A,C,T) ∧ horizCoord(X) ∧ verticCoord(Y) ∧ in(O,C,T) ∧ treasure(O) ⇒ chooseAction(A,T+1,pick(O))).

  • This sentence is reusable in more sophisticated versions of the Wumpus World with multiple agents, multi-cavern vision scope, and multiple treasure objects to be picked that are observable through a variety of sensors.



Declarative x Procedural Knowledge

  • Declarative knowledge:

    • Sentences (data structures) merely declaring what is true, known or believed

    • Declarative KB: a modular, unordered set of largely independent sentences whose semantics is defined independently of any specific control structure

    • Combined at run time to carry out a task by the generic control structure of an inference engine

    • Rules, logical formulas, classes, relations

  • Procedural knowledge:

    • Algorithmic, step-by-step specification of how to carry out a specific task

    • Procedures, functions, workflows

    • Sub-steps combined and ordered at design time by the knowledge engineer

    • Integrate data and control structure
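The contrast can be sketched in a few lines of Python; the rules, engine and percept names are illustrative assumptions:

  # Declarative: an unordered set of (premises, conclusion) rules whose
  # combination order is decided at run time by a generic engine.
  RULES = [({"wumpus_adjacent"}, "dangerous"),
           ({"dangerous"}, "retreat")]

  def engine(facts, rules):           # generic, domain-independent control
      changed = True
      while changed:
          changed = False
          for premises, conclusion in rules:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion); changed = True
      return facts

  # Procedural: the same know-how with the control order fixed at design time.
  def decide(percepts):
      if "wumpus_adjacent" in percepts:
          return "retreat"
      return "forward"

  print(engine({"wumpus_adjacent"}, RULES))  # {'wumpus_adjacent', 'dangerous', 'retreat'}
  print(decide({"wumpus_adjacent"}))         # retreat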



Common Sense x Expert Knowledge

  • Common Sense Knowledge:

    • Recurrent across domains and tasks

    • Decomposable into orthogonal aspects of the world, e.g., space, time, naive physics, folk psychology, etc.

    • Shared by all humans, acquired instinctively by everyday life experience

    • e.g., event calculus axioms about the persistence of environment state changes following occurrences of events:

      • FFluents, T2Time (holds(F,T2)  (EEvents, TTimes, happens(E,T)  initiates(E,F)  (T  T2)  clipped(F,T,T2)) (clipped(F,T,T2)  (EEvents, T1Times, happens(E,T1)  terminates(E,F,T1)  (T  T1)  (T1  T2)

  • Expert Knowledge:

    • Specialized for particular domain and task

    • Possessed only by a few experts, acquired through specialized higher education and professional experience

    • ex, X,Y smelly(X,Y,1)  (loc(wumpus,X+1,Y)  loc(wumpus,X-1,Y)  loc(wumpus,X,Y+1)  loc(wumpus,X,Y-1)).



Domain-Level x Meta-Level Knowledge

  • Domain-level knowledge:

    • Knowledge modeling the agent's environment, used by it to reason and make autonomous decisions

  • Meta-level knowledge:

    • Knowledge about the domain-level knowledge

    • Explicitly describes:

      • Its structure (reuse meta-knowledge)

      • Its assumptions and limitations (reuse meta-knowledge)

      • How to reason with it efficiently (control meta-knowledge)

      • How to explain inferences made with it (user-interface meta-knowledge)

      • How to augment and improve it (learning meta-knowledge)


Roadmap of Automated Reasoning (AR)

  • An inference engine is characterized by a volume in a 3D space whose axes are:

    • Reasoning task: deduction, abduction, inheritance, belief revision, belief update, constraint solving, planning, optimization, induction, analogy

    • Knowledge representation language, ontological commitment: propositional, first-order relational, first-order OO, high-order relational, high-order OO

    • Knowledge representation language, epistemological commitment: Boolean logic CWA, Boolean logic OWA, ternary logic CWA, ternary logic OWA, possibilistic, plausibilistic, probabilistic (and fuzzy?)

  • It can execute a subset of the reasoning tasks, with knowledge encoded in a language whose semantics implies certain ontological and epistemological commitments


Dimensions of AR Services

Reasoning tasks with example inferences:

  • Deduction. From: ∀X,Y p(X,a) ∧ q(b,Y) ⇒ r(X,Y), and p(1,a) ∧ q(b,2). Deduce: r(1,2)

  • Abduction. From: ∀X,Y (p(X,a) ∧ q(b,Y) ⇒ r(X,Y)) ∧ (p(X,c) ∧ n(Y) ⇒ r(X,Y)), and p(1,a) ∧ r(1,2) ∧ p(1,c), w/ bias: q(A,B). Abduce: q(b,2)

  • Induction. From: p(1,a) ∧ q(b,2) ∧ r(1,2) ∧ p(1,c) ∧ n(2), ..., p(3,a) ∧ q(b,4) ∧ r(3,4) ∧ p(3,c) ∧ n(4), w/ bias: F(A,B) ∧ G(C,D) ⇒ H(A,D). Induce: ∀X,Y p(X,a) ∧ q(b,Y) ⇒ r(X,Y)

  • Inheritance. From: ∀G G instanceOf g ⇒ p(G), and s subclassOf g ∧ s1 instanceOf s. Inherit: p(s1)

  • Analogy. From: a ~1 b ∧ a ~2 c ∧ a ~11 d (similarity ranks), and p(b,x) ∧ p(c,x) ∧ p(d,y). Derive by analogy: p(a,x)

  • Belief revision. From: ∀X (p(X) ~> r(X)) (default) ∧ (q(X) ⇒ n(X)) ∧ ¬(r(X) ∧ n(X)), and p(a). Believe by default: r(a). But from the new fact q(a), revise belief r(a) into n(a)

  • Belief update. From: ∀A (si(A) ∧ do(A,k) ⇒ sj(A)) ∧ (p(A) ⇒ si(A)), and p(a). Initially believe: si(a). But after executing do(a,k), update belief si(a) into sj(a)

  • Constraint solving. Solve: X,Y,Z ∈ N, X+Y=Z ∧ 1<X ∧ X<Z ∧ X<Y ∧ Y<Z ∧ Z<7 into: X=2 ∧ 3≤Y ∧ Y≤4 ∧ 5≤Z ∧ Z≤6, or (X=2 ∧ Y=3 ∧ Z=5) ∨ (X=2 ∧ Y=4 ∧ Z=6)

  • Optimization. From: X,Y,Z ∈ N, X+Y=Z ∧ 1<X ∧ X<Z ∧ X<Y ∧ Y<Z ∧ Z<7, w/ utility(X,Y,Z) = X + Z. Derive optimum: X=2 ∧ Y=4 ∧ Z=6

  • Planning. From: ∀A (sa(A) ∧ do(A,i) ⇒ sb(A)) ∧ ... ∧ (sh(A) ∧ do(A,k) ⇒ sg(A)), with sa(a) ∧ goal(a) = sg(a). Plan to execute: [do(a,i), ..., do(a,k)]



Epistemological Commitment

  • Open-World Assumption (OWA):

    • f ∈ KB xor ¬f ∈ KB

    • ask(q) = true iff KB |= q

    • ask(q) = false iff KB |= ¬q

    • ask(q) = unknown iff (KB ⊭ q) ∧ (KB ⊭ ¬q), when the agent does not know enough to conclude

    • Logically sound

    • w/ Boolean logic, requires the agent to always possess enough knowledge to derive the truth of any query

  • Closed-World Assumption (CWA):

    • ¬f ∉ KB (only positive facts)

    • From: KB ⊭ q

    • Assume: q is false (under naf, negation as failure, not classical ¬ semantics)

    • Not logically sound

    • Negation As Failure (NAF) connective: naf f = true iff KB ⊭ f

    • If KB = (naf p ⇒ q) ∧ (naf q ⇒ p), then ask(p) = ask(q) = unknown

    • Thus CWA with naf can require ternary logic
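A minimal Python sketch of the CWA/OWA contrast, with set membership standing in (very crudely) for entailment and all facts illustrative:

  # Under CWA, failure to derive q means q is false; under OWA it is unknown.
  FACTS = {"missile(m1)", "owns(nono,m1)"}          # only positive facts
  NEG_FACTS = set()                                 # OWA may also store negations

  def ask_cwa(query):
      return "true" if query in FACTS else "false"  # naf: not derivable => false

  def ask_owa(query):
      if query in FACTS: return "true"
      if "not " + query in NEG_FACTS: return "false"
      return "unknown"                              # cannot conclude either way

  print(ask_cwa("hostile(france)"))  # false: assumed rather than proven
  print(ask_owa("hostile(france)"))  # unknown: the logically sound answer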




Epistemological Commitment

  • Possibilistic commitment

    • Unary modal connectives:

    • □f, f is necessarily true

    • ◊f, f is possibly true

    • inference rules to combine them with classical connectives

  • Plausibilistic commitment

    • A (partial) order, a “strength of belief”, ranks the plausibility of each formula

    • Inference rules to derive the plausibility of a complex formula with connectives from its atoms

  • Probabilistic commitment

    • An element of [0,1] gives the probability of truth of each formula

    • The laws of probability are applied to derive the probability of a complex formula from its atoms
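A minimal sketch of that derivation, assuming (as a simplification the slides do not make) that the atoms are probabilistically independent:

  # Probability of complex formulas from their atoms, independence assumed.
  def p_not(pa):     return 1 - pa
  def p_and(pa, pb): return pa * pb               # P(a ∧ b) = P(a)P(b)
  def p_or(pa, pb):  return pa + pb - pa * pb     # inclusion-exclusion

  p_rain, p_sprinkler = 0.3, 0.5                  # illustrative priors
  print(p_or(p_rain, p_sprinkler))                # 0.65
  print(p_and(p_rain, p_not(p_sprinkler)))        # 0.15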




Ontological Commitment

  • Propositional:

    • Only propositions with no internal structure, simple symbols (i.e., the whole KB can only describe the properties of one individual instance)

    • No variables, relations, classes or objects

    • e.g., rain ⇒ wetGrass

  • First-Order Relational:

    • Predicates (relations) with universally quantified variable arguments and recursive functions, but no structural aggregation of properties nor distinguished generalization relation

    • ex, D,G day(D)  rain(D)  ground(G) state(G,D,wet)

  • First-Order Object-Oriented:

    • Classes, sub-classes, attributes, associations (relations), operations, objects, links

    • e.g., D:day[weather -> rain] ∧ G:ground ⇒ G[state(D) -> wet].




Ontological Commitment

  • High-Order Relational:

    • Universally quantified variables in predicates, functions, and formula positions

    • ex,R,X,Y trans(R)(X, Y)  (R(X, Y)  (R(X, Z)  trans(R)(Z,Y))

  • High-Order Object-Oriented:

    • Universally quantified variables not only as object names, but also as class names, attribute names, association names, operation names

G[A => T1, M(P:T2) => T2] ∧ trans(“:”)(S,G) ⇒ S[A => T1, M(P:T2) => T2]



KBA Architectures

A KBA is a:

  • Reflex agent? Automata agent? Goal-based agent? Planning agent? Hybrid agent? Utility-based agent? Adaptive agent? Layered agent?

  • It can be any of them!

Is there any constraint between the reasoning performed by the inference engine and the agent architecture?

  • An adaptive agent requires an analogical or inductive inference engine



Non-Adaptive KBA

[Diagram: sensors tell percepts into a Volatile Knowledge Base (VKB) of facts, objects, constraints, logical formulas or probabilities representing the environment instance in the current agent execution; a Persistent Knowledge Base (PKB) of rules, classes, logical formulas or probabilities represents generic laws about the environment class; an inference engine for deduction, abduction, inheritance, belief revision, belief update, planning, constraint solving or optimization asks both KBs, and a non-monotonic engine tells/retracts VKB updates; effectors act on the environment.]



Analogical KBA

[Diagram: as above, but the Persistent Knowledge Base (PKB) stores facts, objects, constraints, logical formulas or probabilities representing environment instances from past agent executions, structured by a similarity measure; an inference engine for analogy asks the PKB and tells/retracts the Volatile Knowledge Base (VKB) representing the environment instance in the current execution; sensors and effectors connect both to the environment.]



Remember the Planning Agent?

[Diagram: the planning agent's processing pipeline, with each stage driven by rules:

  • Percept interpretation rules: percept(t) ∧ model(t) ⇒ model'(t)

  • Model update rules: model(t-1) ∧ model'(t) ⇒ model''(t), maintaining the (past and) current environment model

  • Goal update rules: model''(t) ∧ goals(t-1) ⇒ goals'(t)

  • Prediction of future environments rules: model''(t) ⇒ model(t+n) and model''(t) ∧ action(t) ⇒ model(t+1), producing hypothetical future environment models

  • Action choice rules: model(t+n) = result([action1(t),...,actionN(t+n)]) and model(t+n) |= goal(t) ⇒ do(action(t))]


How Would a Knowledge-Based Planning Agent Then Look?

[Diagram: the same pipeline recast as knowledge bases around a single shared inference engine. Persistent KBs (PKBs) hold the percept interpretation, environment model update, goal update, prediction of future environments, and acting strategy knowledge; volatile KBs (VKBs) hold the past and current environment models, the goals, and the hypothetical future environment models; sensors feed the engine and effectors execute its chosen actions.]



Alternative Planning KBA Architecture

[Diagram: the same architecture, but each PKB is paired with its own inference engine: engine 1 for percept interpretation, engine 2 for environment model update, engine 3 for goal update, engine 4 for prediction of future environments, and engine 5 for the acting strategy, all sharing the VKBs (past and current environment models, goals, hypothetical future environment models).]



Why Use Multiple Inference Engines?

[Diagram: because each stage calls for a different reasoning task: engine 1 performs abduction for percept interpretation, engine 2 performs belief update for environment model update, engine 3 performs deduction for goal update, engine 4 performs constraint solving for prediction of future environments, and engine 5 performs optimization for the acting strategy.]



How to Acquire Knowledge?

  • Development time:

    • Persistent knowledge and initial volatile knowledge

      • Manually by direct coding

      • Semi-automatically through a knowledge acquisition interface

      • Using a knowledge engineering methodology

      • Semi-automatically with machine learning (off-line induction, analogy and reinforcement learning in simulated situations)

      • Using a knowledge discovery methodology

  • Run time:

    • Volatile knowledge

      • Automatically through perceptions and deduction, abduction, inheritance, belief revision, belief update, constraint solving, optimization or analogy

    • Persistent knowledge

      • Automatically through machine learning (analogy, on-line induction or situated reinforcement learning)



Knowledge Engineering

  • Develops methodologies, processes and tools to build knowledge bases and knowledge-based systems

  • Many common issues with software engineering:

    • Robustness, scalability, extensibility, reusability

    • Distributed development, trade-off between quality, cost and time

  • Added difficulties of knowledge engineering:

    • The non-computing domain expert contributes not merely requirements (what to do?) but often the core knowledge (how to do it?); (s)he is thus a critical part of the development team

    • Users need not only to use the system but also to understand how it reasons

    • Lack of standard knowledge representation languages and of industrial-strength CAKE (Computer-Aided Knowledge Engineering) tools

    • Declarative knowledge processed by non-deterministic engines is harder to debug than step-by-step algorithms (more is left to the machine)

  • Common paradigms: object-oriented methods, formal methods

  • Most processes: spiral development at 3 abstraction levels:

    • Knowledge level, formalization level, implementation level



Knowledge Base Engineering

Knowledge Elicitation

  • Knowledge level:

  • Using the vocabulary of the domain experts

  • Natural language, domain-specific graphical notation

Knowledge Formalization

  • Formal level:

  • Unambiguous notation w/ mathematical formal semantics (Logic, probability theory)

  • Consistency verification

  • Semi-formal level:

  • Standard structured textual notation (XML)

  • Standard graphical notation (UML)

  • Validation with Expert

Knowledge Implementation

  • Implementation:

  • Inference engine or programming language

  • Prototype testing





Knowledge Base Engineering

  • Structured interviews with domain expert

  • Data preparation

Knowledge Elicitation

Knowledge level:

Using the vocabulary of the domain experts

Natural language, domain-specific graphical notation

  • Ontologies

  • Semi-formal KR languages

  • Formal KR Language

  • Machine Learning

Knowledge Formalization

Formal level:

Unambiguous notation w/ mathematical formal semantics (Logic, probability theory)

Consistency verification

Semi-formal level:

Standard structured textual notation (XML)

Standard graphical notation (UML)

Validation with Expert

  • Compilers

  • Inference Engines

  • Machine Learning

Knowledge Implementation

Implementation:

Inference engine or programming language

Prototype testing



Off-Line Inductive Agent: Training Phase

[Diagram: during training, an inductive inference engine alternates hypothesis formation and hypothesis verification. It asks a data, example or case base of facts, objects, constraints or logical formulas codifying a representative sample of environment entities, and tells/retracts an Intensional Knowledge Base (IKB) of rules, classes or logical formulas representing generic laws about the environment class; a performance inference engine (any reasoning task except analogy and induction) asks both.]



Off-Line Inductive Agent: Usage Phase

[Diagram: in the usage phase the agent is a non-adaptive KBA whose Persistent Knowledge Base (PKB) of rules, classes, logical formulas or probabilities representing generic laws about the environment class was inductively learned; sensors tell the Volatile Knowledge Base (VKB) representing the environment instance in the current execution, an inference engine for deduction, abduction, inheritance, belief revision, belief update, planning, constraint solving or optimization asks both, and effectors act on the environment.]

