
Introduction to AI & AI Principles (Semester 1) WEEK 10 – Tuesday/Wednesday (2008/09)



Presentation Transcript


  1. Introduction to AI & AI Principles (Semester 1), WEEK 10 – Tuesday/Wednesday (2008/09). John Barnden, Professor of Artificial Intelligence, School of Computer Science, University of Birmingham, UK

  2. Tuesday/Wednesday
  • Continue existing discussion of logical reasoning.
  • Outline a major AI logical-reasoning method and its advantages.
  • Compare & contrast logic and production systems.
  • View reasoning, whether in logic or PSs, as search.
  • Additional matters concerning PSs.
  • New topic: Semantic Networks.

  3. Deductive Reasoning in Logic (contd.)

  4. Reminder: Some Difficulties
  • What inference rule to apply when, and exactly how (e.g., what variable instantiations to do)? I.e., a hefty search process: how to guide it?
  • NB: Searching backwards from a reasoning goal is generally beneficial. (Backwards chaining.)
  • Lots of fiddling around, piecing together and taking apart conjunctions, disjunctions, etc. And we have only shown some of the types of fiddling that are needed!
  • It would be more convenient to be able to combine the effects of certain inference rules in various ways, e.g. to combine MP and variable instantiation.

  5. Reminder: Part of Proof Tree for an example (Proof “Graph” more generally)
  [Tree diagram, reconstructed as a derivation:]
  • From the KB formula (∀p)((is-pers(p) ∧ asleep(p) ∧ (unconsc(p) ∨ dead(p))) → (shld-call(Ego, Pol) ∧ shld-prot(Ego, p))), Univ Elim (p: P123) gives (is-pers(P123) ∧ asleep(P123) ∧ (unconsc(P123) ∨ dead(P123))) → (shld-call(Ego, Pol) ∧ shld-prot(Ego, P123)).
  • From unconsc(P123), Disj Intro gives unconsc(P123) ∨ dead(P123); with is-pers(P123) and asleep(P123), Conj Intro gives is-pers(P123) ∧ asleep(P123) ∧ (unconsc(P123) ∨ dead(P123)).
  • Modus Ponens on those two results gives shld-call(Ego, Pol) ∧ shld-prot(Ego, P123), and Conj Elim gives shld-prot(Ego, P123).

  6. “Natural Deduction” = the sort of reasoning process we have seen so far

  7. Reminder: Some Other Sorts of Fiddling
  • Following are logical inference rules in non-traditional IF-THEN form.
  • Rules about distributivity of conjunction and disjunction over each other:
  • IF-HAVE A ∧ (B ∨ C) THEN-HAVE (A ∧ B) ∨ (A ∧ C), and its converse IF-HAVE (A ∧ B) ∨ (A ∧ C) THEN-HAVE A ∧ (B ∨ C)
  • IF-HAVE A ∨ (B ∧ C) THEN-HAVE (A ∨ B) ∧ (A ∨ C), and its converse IF-HAVE (A ∨ B) ∧ (A ∨ C) THEN-HAVE A ∨ (B ∧ C)

  8. Reminder: Some Other Sorts of Fiddling, contd.
  • Double negation:
  • IF-HAVE ¬¬A THEN-HAVE A
  • IF-HAVE A THEN-HAVE ¬¬A
  • De Morgan’s Laws (in inference-rule form):
  • IF-HAVE ¬(A ∧ B) THEN-HAVE ¬A ∨ ¬B, and its converse IF-HAVE ¬A ∨ ¬B THEN-HAVE ¬(A ∧ B)
  • IF-HAVE ¬(A ∨ B) THEN-HAVE ¬A ∧ ¬B, and its converse IF-HAVE ¬A ∧ ¬B THEN-HAVE ¬(A ∨ B)
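These IF-HAVE/THEN-HAVE rules are easy to mechanize. A minimal sketch (my own encoding, not anything from the lecture): formulas are nested tuples such as ('not', A), ('and', A, B), ('or', A, B), and a rewriter applies double negation and De Morgan wherever they fire.

```python
# Sketch: propositional "fiddling" rules as rewrites over nested tuples.
# Atoms are strings; compound formulas are ('not', A), ('and', A, B), ('or', A, B).

def rewrite(f):
    """Apply double negation and De Morgan top-down until no rule fires."""
    if isinstance(f, str):                             # an atom: nothing to do
        return f
    if f[0] == 'not':
        inner = f[1]
        if isinstance(inner, tuple):
            if inner[0] == 'not':                      # IF-HAVE not not A THEN-HAVE A
                return rewrite(inner[1])
            if inner[0] == 'and':                      # not (A and B) => (not A) or (not B)
                return ('or', rewrite(('not', inner[1])), rewrite(('not', inner[2])))
            if inner[0] == 'or':                       # not (A or B) => (not A) and (not B)
                return ('and', rewrite(('not', inner[1])), rewrite(('not', inner[2])))
        return ('not', rewrite(inner))
    return (f[0],) + tuple(rewrite(arg) for arg in f[1:])

print(rewrite(('not', ('and', 'happy(Mike)', 'rich(Peter)'))))
# => ('or', ('not', 'happy(Mike)'), ('not', 'rich(Peter)'))
```

Note that this hard-codes one rewriting direction for each law; the converses in the slide would be separate rules, and choosing which direction to apply when is exactly the search-guidance problem the slides discuss.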

  9. (New) Other Sorts of Fiddling, contd.
  • An analogue of De Morgan’s Laws for interchange of universal and existential quantification:
  • IF-HAVE ¬(∀α)A THEN-HAVE (∃α)¬A, and its converse IF-HAVE (∃α)¬A THEN-HAVE ¬(∀α)A
  • IF-HAVE ¬(∃α)A THEN-HAVE (∀α)¬A, and its converse IF-HAVE (∀α)¬A THEN-HAVE ¬(∃α)A
  • That is, we can push a negation sign through a quantifier in either direction if we switch universal to existential and vice versa.

  10. A Partial Response to the Difficulties
  • Use essentially just one inference rule: resolution, combining variable instantiation with a generalization of MP. (See Callan book for details if interested.)
  • Also, in effect, reason backwards from the goal, using reasoning by contradiction (“Reductio ad Absurdum”): i.e., you assume the negation of the goal G, and show that you can infer a contradiction (basically, show that you can infer something C and its negation not-C).
  • Reasoning by contradiction is often used in human common-sense reasoning as well as in mathematics.
  • In the resolution proof method, the reasoning to the contradiction is done (mainly) by applications of resolution.

  11. Resolution Proof Method, contd.
  • Need to have converted all formulas in the knowledge base into clause form: each item in the knowledge base is now either a literal (a predicate-symbol application or a negation of one) or a disjunction of literals. All variables are considered universally quantified. A special treatment of existential quantification (Skolemization) is needed for this.
  • Simple example of a clause: ¬is-person(p) ∨ ¬is-shop(q) ∨ ¬old-fashioned(q) ∨ likes(p,q)
  • The negation of the reasoning goal is also converted into clause form.
  • The conversion into clause form effectively absorbs the above sorts of annoying “fiddling” – introduction/elimination of conjunction and disjunction, distribution, De Morgan, etc.
  • The conversion is complex but only needs to be done once for KB items.

  12. Proof Diagram (KB items underlined in the original slide)
  NEGATED GOAL: ¬likes(S, G)
  • KB clause: ¬is-pers(p) ∨ ¬is-shop(q) ∨ ¬old-fash(q) ∨ likes(p,q)
  • Resol (q:G, p:S) on the negated goal and the KB clause gives: ¬is-pers(S) ∨ ¬is-shop(G) ∨ ¬old-fash(G)
  • Resol with KB item is-shop(G) gives: ¬is-pers(S) ∨ ¬old-fash(G)
  • Resol with KB item is-pers(S) gives: ¬old-fash(G)
  • Resol with KB item old-fash(G) gives: CONTRADICTION (= null clause)
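The refutation in the diagram above can be replayed mechanically. A sketch (my encoding, not the lecture's code): clauses are frozensets of string literals, with '~' marking negation. Real resolution also needs unification to discover substitutions like q:G, p:S; here the KB clause is written already instantiated with those bindings, so each step is a ground resolution.

```python
# Ground resolution sketch for the shop example: derive the empty clause
# from the negated goal and the (pre-instantiated) KB clauses.

def negate(lit):
    """Complement of a literal: p <-> ~p."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (possibly including the empty clause)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

kb_rule  = frozenset({'~is-pers(S)', '~is-shop(G)', '~old-fash(G)', 'likes(S,G)'})
neg_goal = frozenset({'~likes(S,G)'})

step1 = resolve(neg_goal, kb_rule)[0]                 # ~is-pers(S) v ~is-shop(G) v ~old-fash(G)
step2 = resolve(step1, frozenset({'is-shop(G)'}))[0]  # ~is-pers(S) v ~old-fash(G)
step3 = resolve(step2, frozenset({'is-pers(S)'}))[0]  # ~old-fash(G)
step4 = resolve(step3, frozenset({'old-fash(G)'}))[0]
print(step4 == frozenset())                           # empty clause: contradiction reached
```

Deriving the null clause shows the negated goal is inconsistent with the KB, i.e. the original goal likes(S, G) follows from it.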

  13. Benefits of Resolution Proof Method
  • Reasons backwards from the goal, thereby focussing the search.
  • Sidelines some annoying fiddling (absorbs most of it into the reduction to clause form, & some into resolution).
  • Combines variable substitution with other inference acts in an intuitive and algorithmically convenient way (in the resolution inference rule).
  • By ensuring that each step is a bigger chunk, the overall search is simplified (though still difficult) compared to what is needed with Natural Deduction (ND).
  • Because of bigger chunks, and having essentially just one inference rule, the task of finding useful search-guidance heuristics is simplified compared to ND.
  • Clause form is somewhat unnatural, but resolution is itself quite natural once you get used to it.
  • ND is probably better for human use, whereas the resolution proof method is probably better for machine use. However, ND has been implemented in AI.

  14. Procedural/Declarative Trade-Off
  • Recall: inference rules are mechanisms – they do something. They’re “procedural.”
  • Implications (→) and double implications (↔) are just statements. They’re “declarative.” They don’t do anything all by themselves.
  • And recall we can rewrite an implication, L → R, as a disjunction: ¬L ∨ R.
  • Need to apply an inference rule such as Modus Ponens to get an implication to deliver a conclusion concerning its left-hand or right-hand side.
  • (Double implication: either need another inference rule similar to MP, or have to go to considerably more complication using something like (L → R) ∧ (R → L) and two applications: one to get L → R, the other to get R.)

  15. P/D Trade-Off, contd.
  • Consider again inference rules such as (part of De Morgan):
  • (R1) IF-HAVE ¬(A ∧ B) THEN-HAVE ¬A ∨ ¬B
  • (R2) IF-HAVE ¬A ∨ ¬B THEN-HAVE ¬(A ∧ B)
  • What if we used the following implications instead (or one double implication):
  • (F1) ¬(A ∧ B) → ¬A ∨ ¬B
  • (F2) ¬A ∨ ¬B → ¬(A ∧ B)
  • Suppose we’re given ¬( happy(Mike) ∧ rich(Peter) ).
  • How do we get ¬happy(Mike) ∨ ¬rich(Peter)?
  • First create the right instance of (F1): ¬(happy(Mike) ∧ rich(Peter)) → ¬happy(Mike) ∨ ¬rich(Peter). Now apply MP to this and to the given formula.
  • So we have the trade-off between (a) simply applying (the appropriate instance of) R1 directly to the given formula and (b) applying MP to (the appropriate instance of) F1 and the given formula.

  16. Using the special De Morgan rule: from ¬( happy(Mike) ∧ rich(Peter) ), R1 gives ¬happy(Mike) ∨ ¬rich(Peter).
  Using an implication plus MP: from ¬( happy(Mike) ∧ rich(Peter) ) and the implication instance ¬(happy(Mike) ∧ rich(Peter)) → ¬happy(Mike) ∨ ¬rich(Peter), MP gives ¬happy(Mike) ∨ ¬rich(Peter).
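The two routes in the slide above can be put side by side in code. This is a sketch with my own formula encoding (nested tuples, as hypothetical illustration, not the lecture's notation): route (a) bakes De Morgan into a procedural rule, route (b) keeps De Morgan as a declarative implication and lets a generic MP mechanism do the work.

```python
# Formulas as nested tuples: ('not', A), ('and', A, B), ('or', A, B), ('implies', A, B).

GIVEN = ('not', ('and', 'happy(Mike)', 'rich(Peter)'))

# Route (a): the special De Morgan rule R1, applied directly to the formula.
def rule_R1(f):
    if f[0] == 'not' and isinstance(f[1], tuple) and f[1][0] == 'and':
        return ('or', ('not', f[1][1]), ('not', f[1][2]))
    return None                                        # rule doesn't apply

# Route (b): a generic Modus Ponens mechanism over declarative implications.
def modus_ponens(implication, fact):
    if implication[0] == 'implies' and implication[1] == fact:
        return implication[2]
    return None

# The appropriate instance of the declarative formula F1.
F1_instance = ('implies', GIVEN,
               ('or', ('not', 'happy(Mike)'), ('not', 'rich(Peter)')))

print(rule_R1(GIVEN) == modus_ponens(F1_instance, GIVEN))  # both routes agree
```

Route (a) needs no extra formula but adds a rule to the engine; route (b) keeps the engine fixed (just MP) and instead needs the right instance of F1 created first, which is the extra work the slide mentions.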

  17. P/D Trade-Off, contd.
  • So we have the trade-off between (a) simply applying (the appropriate instance of) R1 directly to the given formula and (b) applying MP to (the appropriate instance of) F1 and the given formula.
  • Method (a) is simpler and somewhat less work, but means having an extra inference rule to design and to include in processing.
  • Also note: instead of, say, the formula (∀x)(is-shop(x) → likes(S, x)), we could have the inference rule IF-HAVE is-shop(x) THEN-HAVE likes(S, x) [although domain-specific and unsound, and hence unlike trad inference rules; but rules of this sort do appear in special, advanced logics].
  • This inference rule is less work to apply (much as above), but doesn’t allow an inference from, say, ¬likes(S, G) to ¬is-shop(G).
  • Moreover the formula could itself be a result of other reasoning, and apply only under certain conditions, rather than being permanently in play.
  • General lesson: more procedural approaches can be simpler in some ways and more efficient, but can be less flexible in some ways.

  18. Logiccompared/contrasted toProduction Systems

  19. Logic versus Production Systems
  • Rules in PSs and inference rules in logic are at one level exactly the same sort of thing, and it’s hard to make a firm general distinction. You could in principle regard PSs as a form of logic. (“Logic” is not itself a logically watertight category.)
  • PSs are much more procedural than logic tends to be. In PSs, long-term domain knowledge tends very strongly to be put into rules, whereas in logic such knowledge tends more to be put into formulas; and in the simpler, more trad forms of logic it is always put into formulas.
  • The rules in PSs are generally domain-specific and unsound, whereas in logic the rules are much more likely to be domain-neutral and sound; and in the simpler, more trad forms of logic they must be domain-neutral and sound.
  • Relatedly, rules in PSs are often regarded merely as default rules (or: as defeasible): their effect is not regarded as absolutely definite, and is subject to defeat (cancellation) by the effect of other rules.
  • You need quite advanced forms of logic to get similar abilities. And disciplinary tradition causes much more anxiety over the mathematical underpinnings of such flexible logics than of PSs.

  20. Reasoning in Logic or Production Systemsviewed as a case ofSearch

  21. Reasoning (in Logic or PSs) as Search
  • Recall: in a search problem we have:
  • states, including the picking out of one as the initial state
  • operations – ways of converting a state into a (usually different) state
  • operation costs (or whole-path costs) – in search problems where operations correspond to actions in a “world” outside the search itself
  • one or more individually specified goal states, OR a goal condition
  • a specification of the precise task – e.g. return an optimal solution path; return a reasonably good solution path; return a goal conforming to the goal condition; return the best goal conforming to the goal condition; see whether one or more goals can be reached at all; etc.

  22. The Case of Reasoning
  • a state = a collection of propositions expressed in some way; a state could be the contents of a PS’s WM at some particular point.
  • initial state = contents of an initial WM or of (a portion of) an initial KB, plus possibly other things, such as “(sub)goals” in the reasoning sense (hypotheses to be investigated) or clauses for the negated goal (in the resolution proof method)
  • operations = inference rules (or ways of assuming things, or mechanisms for simplifying a proposition, simplifying a state, doing other clean-up operations, etc.); but NB we must take the effect of the rule (or whatever) to be a whole new state, not just the propositions that are emitted by the rule
  • costs: (usually) not applicable in non-planning reasoning tasks, because operations (usually) do not represent actions in some world outside the search: the operations are the actions of interest. (Of course, doing an operation has a computational cost, and differences in this cost might come into decisions about what operations to try when.)

  23. PS or Logic: sequence of states (diagram, reconstructed)
  • Initial state: holds3(Ego, B); distinct(K, S); in(Ego, K); …
  • After R4 (a PS rule or logic inference rule): holds3(Ego, B); holds0(Ego, B); distinct(K, S); in(Ego, K); …
  • After R2: holds3(Ego, B); holds0(Ego, B); distinct(K, S); in(Ego, K); in(B, K); …
  • After R5: holds3(Ego, B); holds0(Ego, B); distinct(K, S); in(Ego, K); in(B, K); in(B, S); …
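The state sequence above is what a forward-chaining Recognize/Act loop produces when rules only ever add propositions. A toy sketch (the rule conditions here are my own invention; the lecture does not define R2/R4/R5):

```python
# Toy forward chaining: working memory is a set of propositions; each rule is
# (condition-set, additions-set) and fires when its condition is in WM.

def forward_chain(wm, rules):
    """Fire rules until no rule adds anything new (a Recognize/Act loop)."""
    wm = set(wm)
    changed = True
    while changed:
        changed = False
        for condition, additions in rules:
            if condition <= wm and not additions <= wm:   # matches and adds something new
                wm |= additions                           # purely additive: a bigger state
                changed = True
    return wm

rules = [
    ({'holds3(Ego,B)'},              {'holds0(Ego,B)'}),   # plays the role of R4
    ({'holds0(Ego,B)', 'in(Ego,K)'}, {'in(B,K)'}),         # plays the role of R2
    ({'in(B,K)', 'distinct(K,S)'},   {'in(B,S)'}),         # plays the role of R5
]
wm = forward_chain({'holds3(Ego,B)', 'distinct(K,S)', 'in(Ego,K)'}, rules)
print('in(B,S)' in wm)   # True: propositions accumulate monotonically
```

Because these rules never delete or modify anything, each "new state" is a superset of the old one, which is exactly the accumulate-only situation described two slides later.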

  24. The Case of Reasoning, contd.
  • a goal condition is appropriate, rather than one or more individually specified goal states: the condition could be that the state include
  • a particular formula being investigated (i.e., a goal formula), or
  • a particular type of formula (e.g., stating something the agent should do), or
  • anything that looks interesting/good/bad/… in some way, or …
  • the task: normally and mainly (unless doing planning), to determine whether a goal state can be reached. Possibly also to return one or more solution paths. Why? E.g.:
  • Provide an explanation to a human user.
  • Learn from the path things that could be useful for later reasoning tasks.
  • Note: planning can be viewed as a form of reasoning, and then a solution path will correspond to a world action path, and so is of course (an important part of) the answer.

  25. The Case of Reasoning, contd.
  • If an operation can never modify or delete anything in the state worked upon, and can never prevent any later application of any operation or affect its result, there’s never any need to backtrack or otherwise switch to another part of the search space. Just keep accumulating propositions added by operations.
  • The choice issue is then not choice between states but choice of what propositions to apply operations to (and how exactly to apply them to those propositions). It is usually impractical to apply all possible operations in all possible ways; i.e., we don’t generally expand a state fully.
  • Otherwise, we can’t in general just keep accumulating, so we have the usual between-state choice issue as well as the above.
  • Example on next slide.

  26. Each ASSUME prevents the other and deletes something in the state (diagram, reconstructed)
  • Initial state: may-be-married(S, P); may-be-married(S, G); is-policeman(P); is-gardener(G); …
  • After ASSUME: are-married(S, P) [deleting may-be-married(S, P)]: are-married(S, P); may-be-married(S, G); is-policeman(P); is-gardener(G); …
  • Alternatively, after ASSUME: are-married(S, G) [deleting may-be-married(S, G)]: may-be-married(S, P); are-married(S, G); is-policeman(P); is-gardener(G); …
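The ASSUME example shows why between-state choice reappears: each assumption consumes its may-be fact and blocks the rival assumption, so the two successor states are genuine alternatives that a searcher must keep separately (and possibly backtrack between). A sketch using the slide's fact names but my own encoding; the "prevents the other" condition (no ASSUME once any marriage is assumed) is my reading of the slide:

```python
# Each state is a frozenset of propositions; ASSUME operations generate
# alternative successor states rather than one accumulated state.

def successors(state):
    """One ASSUME successor per may-be-married fact; none once one is assumed."""
    if any(f.startswith('are-married') for f in state):
        return []                                    # each ASSUME prevents the other
    out = []
    for fact in sorted(state):
        if fact.startswith('may-be-married'):
            # the consumed may-be fact is deleted from the new state
            out.append((state - {fact}) | {fact.replace('may-be-', 'are-')})
    return out

initial = frozenset({'may-be-married(S,P)', 'may-be-married(S,G)',
                     'is-policeman(P)', 'is-gardener(G)'})
branches = successors(initial)
print(len(branches))   # 2 alternative successor states, neither a superset of the other
```

Contrast this with the purely additive rules earlier: there, every operation led to one growing state; here, the search tree genuinely branches.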

  27. More on Production Systems(separate slides by John Bullinaria) • Recognize/Act cycle for forward chaining (p.4) • Need for a Reason Maintenance system (p.12) • Choice between Forwards and Backwards (pp.13-14) • Conflict Resolution (pp.15 onwards, but excluding meta-rules)
