
CS3243: Introduction to Artificial Intelligence


Presentation Transcript


  1. CS3243: Introduction to Artificial Intelligence Semester 2, 2017/2018

  2. Admin: Midterm Exam (5 Mar) • During lecture time, 2-4pm: we start at 2:05 exactly! • Restricted open book: • AIMA 3rd edition textbook • Lecture notes (minimally annotated) • Tutorial questions & solutions • 1 A4-sized double-sided helpsheet • Bring a calculator and a pen! You are not allowed to use pencil • No make-up exam; it is worth 20% of CA, so please attend!

  3. Logical Agents AIMA Chapter 7

  4. Outline • Knowledge-based agents • Logic in general: models and entailment • Propositional (Boolean) logic • Equivalence, validity, satisfiability • Inference rules and theorem proving • Resolution • Forward chaining • Backward chaining • Efficient model checking

  5. Knowledge-Based Agents • Until now – trying to find an optimal solution via search. • No real model of what the agent knows. • A chess-playing agent does not know that pieces cannot move off the board (it needs to fully search the state space to see which moves are illegal). • This class: represent the agent's domain knowledge using logical formulas.

  6. Knowledge Base (KB) [Diagram: an Inference Engine (domain-independent algorithms) sits on top of a Knowledge Base (domain-specific content)] • Knowledge base = set of sentences in a formal language • Declarative approach to building an agent (or other system): • Tell it what it needs to know • Then it can Ask itself what to do - answers should follow from the KB • Agents can be viewed at the knowledge level, i.e., specify knowledge and goals, regardless of implementation • Or at the implementation level, i.e., data structures in the KB and algorithms that manipulate them

  7. A Generic Knowledge-Based Agent • What is the best action at time t? • What did I perceive at time t? • What happened? What have I done?
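A minimal sketch (not from the lecture) of the generic knowledge-based agent loop behind these questions: Tell the KB what was perceived at time t, Ask it for the best action, then Tell it which action was taken. The class and helper names below are illustrative placeholders.

```python
class KB:
    """A knowledge base: a set of sentences we can Tell facts to and Ask queries of."""
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        # A real KB would answer by inference (model checking, resolution,
        # forward/backward chaining, ...) over self.sentences.
        return None


def make_percept_sentence(percept, t):   # "What did I perceive at time t?"
    return f"Percept({percept}, {t})"

def make_action_query(t):                # "What is the best action at time t?"
    return f"BestAction({t})"

def make_action_sentence(action, t):     # "What happened? What have I done?"
    return f"Did({action}, {t})"


def kb_agent_program(kb, percept, t):
    """One step of the generic KB agent: Tell, Ask, Tell."""
    kb.tell(make_percept_sentence(percept, t))
    action = kb.ask(make_action_query(t))
    kb.tell(make_action_sentence(action, t))
    return action
```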

  8. Wumpus World Performance Measure? Environment? Actuators? Sensors?

  9. Properties of Wumpus World

  10. Exploring a Wumpus World • Agent's view

  11. Exploring a Wumpus World • Agent's view • No Breeze, no Stench at [1,1]

  12. Exploring a Wumpus World • Agent's view

  13. Exploring a Wumpus World • Agent's view

  14. Logic in General • Logic: a formal language for knowledge representation and for inferring conclusions • Syntax: defines the sentences in the language • Semantics: defines the "meaning" of sentences, i.e., defines the truth of a sentence in a world • E.g., the language of arithmetic: • x + y = 4 is a sentence; x4y+ = is not a sentence • x + y = 4 is true in a world where x = 2 and y = 2 • x + y = 4 is false in a world where x = 1 and y = 1

  15. Entailment • Modeling: m models α (written m ⊨ α) if α is true under m. For example, what are the models of the following sentence? • We let M(α) be the set of all models of α • Entailment means that one thing follows from another: KB ⊨ α, or equivalently M(KB) ⊆ M(α) • For example: x + y = 4 entails 4 = x + y.

  16. Entailment in the Wumpus World • Situation after detecting nothing in [1,1], moving right, breeze in [2,1] • Consider possible models for the KB, assuming only pits (in the three unvisited squares [1,2], [2,2], [3,1]) • 3 Boolean choices ⇒ 2³ = 8 possible models

  17. Wumpus Models

  18. Wumpus Models • KB = wumpus-world rules + percepts

  19. Wumpus Models • KB = wumpus-world rules + percepts • α₁ = "[1,2] is safe"; KB ⊨ α₁, proved by model checking • The agent can infer that [1,2] is safe

  20. Wumpus Models • KB = wumpus-world rules + percepts • α₂ = "[2,2] is safe"; KB ⊭ α₂ • The agent cannot infer that [2,2] is safe (or unsafe)!

  21. Inference • Define KB ⊢ᵢ α to mean "sentence α is derived from KB by inference algorithm i" • Soundness: i is sound if KB ⊢ᵢ α implies KB ⊨ α. "Don't infer nonsense" • Completeness: i is complete if KB ⊨ α implies KB ⊢ᵢ α. "If it's implied, it can be inferred" • Inference problem: can a sentence α be derived from KB? • "Entailment is like the needle (α) being in the haystack (KB), and inference is like finding it" • Is a given inference algorithm complete and sound?

  22. Completeness: i is complete if whenever KB ⊨ α, it is also true that KB ⊢ᵢ α • [Diagram: the original KB, the sentences derived from KB using i, and all possible sentences entailed by KB, drawn as nested sets] • An incomplete inference algorithm cannot reach all possible conclusions • Equivalent to completeness in search (Chapter 3)

  23. Propositional Logic: Syntax • A simple logic – illustrates basic ideas • Defines the allowable sentences • Atomic sentences are represented by proposition symbols, e.g. P₁, P₂ • Logical connectives construct complex sentences from simpler ones: • If S is a sentence, ¬S is a sentence (negation) • If S₁ and S₂ are sentences: • S₁ ∧ S₂ is a sentence (conjunction) • S₁ ∨ S₂ is a sentence (disjunction) • S₁ ⇒ S₂ is a sentence (implication) • S₁ ⇔ S₂ is a sentence (biconditional)

  24. Propositional Logic: Semantics • A model is then just a truth assignment to the proposition symbols • If a model has n symbols, how many truth assignments are there? (2ⁿ) • The truth value of every other sentence is derived according to the logical rules for the connectives
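To make the semantics concrete, here is a minimal sketch (an illustration, not the lecture's notation) that represents sentences as nested tuples and evaluates their truth value under a model given as a dict of truth assignments.

```python
def pl_true(sentence, model):
    """Return the truth value of a propositional sentence in the given model."""
    if isinstance(sentence, str):                 # atomic proposition symbol
        return model[sentence]
    op, *args = sentence
    if op == 'not':
        return not pl_true(args[0], model)
    if op == 'and':
        return all(pl_true(a, model) for a in args)
    if op == 'or':
        return any(pl_true(a, model) for a in args)
    if op == 'implies':
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    if op == 'iff':
        return pl_true(args[0], model) == pl_true(args[1], model)
    raise ValueError(f"unknown connective: {op}")


# Example: B1,1 <=> (P1,2 v P2,1), evaluated in a model with a pit in [1,2].
rule = ('iff', 'B11', ('or', 'P12', 'P21'))
model = {'B11': True, 'P12': True, 'P21': False}
print(pl_true(rule, model))   # True
```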

  25. Knowledge Base for the Wumpus World • Pi,j: there is a pit in [i,j] • Bi,j: there is a breeze in [i,j] • Rules: "pits cause breezes in adjacent squares", e.g. B1,1 ⇔ (P1,2 ∨ P2,1) • The KB is true iff the conjunction of all its rules and percept sentences is true

  26. Inference • Given a knowledge base, infer something non-obvious about the world • Mimic logical human reasoning • After exploring 3 squares, we have some understanding of the Wumpus world • Inference = deriving knowledge from percepts • Given KB and α, we want to know whether KB ⊨ α

  27. Truth Table for Inference • Does KB entail α₁? • Can we infer that [1,2] is safe from pits?

  28. Inference by Truth-Table Enumeration • Depth-first enumeration of all models is sound and complete • For n symbols, time complexity is O(2ⁿ) and space complexity is O(n) • Check all possible truth assignments
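A minimal sketch of this enumeration (an illustration, not the textbook pseudocode): the KB and the query are given as Python predicates over a model dict, and every assignment to the symbols is checked.

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Return True iff kb entails alpha, by enumerating all 2^n models."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False            # found a model of KB in which alpha is false
    return True


# Wumpus-style example (pits only): the KB says no pit in [1,1], no breeze in
# [1,1] (hence no pit in [1,2] or [2,1]), and a breeze in [2,1] (hence a pit
# in [1,1], [2,2], or [3,1]).
symbols = ['P11', 'P12', 'P21', 'P22', 'P31']
kb = lambda m: ((not m['P11'])
                and not (m['P12'] or m['P21'])           # no breeze in [1,1]
                and (m['P11'] or m['P22'] or m['P31']))  # breeze in [2,1]
alpha1 = lambda m: not m['P12']     # "[1,2] is safe"
alpha2 = lambda m: not m['P22']     # "[2,2] is safe"
print(tt_entails(kb, alpha1, symbols))   # True  -- KB |= alpha1
print(tt_entails(kb, alpha2, symbols))   # False -- KB does not entail alpha2
```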

  29. Validity and Satisfiability • A sentence is valid if it is true in all models, e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B • Validity is connected to entailment via the Deduction Theorem: KB ⊨ α iff (KB ⇒ α) is valid • A sentence is satisfiable if it is true in some model, e.g., A ∨ B, C • A sentence is unsatisfiable if it is true in no models, e.g., A ∧ ¬A • Satisfiability is connected to entailment via the following: KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable

  30. Proof Methods

  31. Applying Inference Rules • Equivalent to a search problem • Initial state: the initial KB • States: KBs • Actions: inference rules • Transition model: the result of an action is to add the new sentence to the current KB • Goal: the KB contains the sentence to prove • Examples of inference rules: • And-Elimination (A.E.): from α ∧ β, infer α • Modus Ponens (M.P.): from α ⇒ β and α, infer β • Logical equivalences, e.g. rewriting α ⇔ β as (α ⇒ β) ∧ (β ⇒ α)

  32. Resolution for Conjunctive Normal Form (CNF) • CNF: a conjunction of "disjunctions of literals" (clauses) • E.g., (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D) • Resolution: if a literal appears in one clause and its negation appears in another, the complementary pair can be deleted: the two clauses resolve to a single clause containing all of their remaining literals (delete duplicate literals as necessary) • Resolution is sound and complete for propositional logic
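A minimal sketch of one application of the resolution rule, with clauses represented as frozensets of literals ('P' for a positive literal, '-P' for its negation); the representation is an illustration, not the lecture's notation.

```python
def negate(literal):
    return literal[1:] if literal.startswith('-') else '-' + literal

def resolve(c1, c2):
    """Return all clauses obtainable by resolving clauses c1 and c2."""
    resolvents = set()
    for lit in c1:
        if negate(lit) in c2:
            # drop the complementary pair; duplicate literals merge automatically
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents


# Resolving (A v -B) with (B v -C v -D) on B yields (A v -C v -D).
print(resolve(frozenset({'A', '-B'}), frozenset({'B', '-C', '-D'})))
```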

  33. Why is Resolution for CNF Sound and Complete?

  34. Conversion to CNF: the Rules • Convert α ⇔ β to (α ⇒ β) ∧ (β ⇒ α) • Convert α ⇒ β to ¬α ∨ β • Move ¬ inwards using De Morgan's laws and double negation: • Convert ¬(α ∧ β) to ¬α ∨ ¬β • Convert ¬(α ∨ β) to ¬α ∧ ¬β • Convert ¬¬α to α • Distribute ∨ over ∧: convert α ∨ (β ∧ γ) to (α ∨ β) ∧ (α ∨ γ)
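As a worked illustration (added here for concreteness), applying these rules to the breeze rule B1,1 ⇔ (P1,2 ∨ P2,1) from slide 25 gives:
• Eliminate ⇔: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
• Eliminate ⇒: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
• Move ¬ inwards (De Morgan): (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
• Distribute ∨ over ∧: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
The result is a conjunction of three clauses, ready for resolution.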

  35. Resolution Algorithm • Proof by contradiction: to show KB ⊨ α, show that (KB ∧ ¬α) is unsatisfiable • Convert (KB ∧ ¬α) to CNF and keep applying the resolution rule: stop when no new clauses can be added (KB does not entail α) or when the empty clause is derived (KB entails α) • What does an empty clause imply?
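A minimal sketch of this proof-by-contradiction loop, reusing the frozenset clause representation from the earlier resolution sketch; the caller supplies KB ∧ ¬α already converted to CNF clauses.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolve(c1, c2):
    return {frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2}

def pl_resolution(clauses):
    """Return True iff the clause set is unsatisfiable (the empty clause is derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:          # empty clause: contradiction found
                    return True
                new.add(resolvent)
        if new <= clauses:                 # no new clauses: no contradiction derivable
            return False
        clauses |= new


# KB (in CNF): (-B11 v P12 v P21), (-P12 v B11), (-P21 v B11), (-B11)
# Query alpha = "no pit in [1,2]" (-P12); add the negated query P12 and resolve.
kb_and_not_alpha = [frozenset({'-B11', 'P12', 'P21'}), frozenset({'-P12', 'B11'}),
                    frozenset({'-P21', 'B11'}), frozenset({'-B11'}),
                    frozenset({'P12'})]
print(pl_resolution(kb_and_not_alpha))     # True, so KB |= -P12
```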

  36. Resolution Example • Negate the sentence to be proved, add it to the KB, and derive a contradiction (proof by contradiction)

  37. Forward and Backward Chaining • Horn Form (restricted): KB = conjunction of Horn clauses • Horn clause = definite clause or goal clause • Definite clause: a single symbol, or (conjunction of symbols) ⇒ symbol • Goal clause: (conjunction of symbols) ⇒ False • e.g., C ∧ (B ⇒ A) ∧ (C ∧ D ⇒ B) • Inference with Horn clauses: forward chaining or backward chaining algorithms; easy to interpret, run in linear time • The inference rule is Modus Ponens (for Horn Form): sound and complete for Horn KBs

  38. Forward Chaining (FC) • KB of Horn clauses, viewed as an AND-OR graph • Idea: fire any rule whose premise is satisfied in the KB, add its conclusion to the KB, and repeat until the query is found

  39. Forward Chaining (FC) Algorithm • For every rule c, let count[c] be the number of symbols in c's premise • For every symbol s, let inferred[s] be initially false • Let Q be a queue of symbols, initially containing all symbols known to be true • While Q is not empty: • pop a symbol p from Q; if it is the query, we're done • Set inferred[p] to true (if it already was, skip p) • For each clause c such that p is in the premise of c, decrement count[c]. If count[c] = 0, add c's conclusion to Q • Forward chaining is sound and complete for Horn KBs
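A minimal sketch of this counting-based forward chaining (an illustration; the clause representation and example names are not from the slides), with each definite clause given as a (premises, conclusion) pair.

```python
from collections import deque

def pl_fc_entails(clauses, facts, query):
    """Forward chaining for Horn KBs: does the KB entail `query`?"""
    count = {i: len(premises) for i, (premises, _) in enumerate(clauses)}
    inferred = set()                    # symbols already processed
    agenda = deque(facts)               # queue of symbols known to be true
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:       # whole premise satisfied: fire the rule
                    agenda.append(conclusion)
    return False


# Example Horn KB: P => Q, L ^ M => P, B ^ L => M, A ^ P => L, A ^ B => L; facts A, B.
clauses = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
           (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(pl_fc_entails(clauses, ['A', 'B'], 'Q'))   # True
```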

  40. Forward Chaining Example [figures: the AND-OR graph is updated over iterations 1-8 as facts are inferred]

  41. Proof of Completeness • FC derives every atomic sentence entailed by a Horn KB • Suppose FC reaches a fixed point where no new atomic sentences are derived • Consider the final state as a model m that assigns true to the inferred symbols and false to the rest • Every clause in the original KB is true in m (otherwise some rule's premise would be fully inferred while its conclusion is not, contradicting the fixed point) • Hence, m is a model of the KB • If KB ⊨ q, then q is true in every model of KB, including m; so q was inferred by FC

  42. Backward Chaining (BC) • A backtracking depth-first search algorithm • Idea: work backwards from the query q • To prove q by BC: • check if q is known already, or • prove by BC all the premises of some rule concluding q • Avoid loops: check if the new subgoal is already on the goal stack • Avoid repeated work: check if the new subgoal • has already been proven true, or • has already failed
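A minimal sketch of backward chaining over the same (premises, conclusion) clause representation as the forward-chaining sketch. Loop avoidance uses the stack of goals currently being proved; caching of already-proven and already-failed subgoals is omitted for brevity.

```python
def pl_bc_entails(clauses, facts, query, stack=frozenset()):
    """Backward chaining for Horn KBs: can `query` be proved from facts and clauses?"""
    if query in facts:
        return True                       # already known to be true
    if query in stack:
        return False                      # subgoal already on the goal stack: avoid the loop
    stack = stack | {query}
    for premises, conclusion in clauses:
        if conclusion == query:
            # try to prove every symbol in this rule's premise
            if all(pl_bc_entails(clauses, facts, p, stack) for p in premises):
                return True
    return False


# Same example KB as before: P => Q, L ^ M => P, B ^ L => M, A ^ P => L, A ^ B => L.
clauses = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
           (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(pl_bc_entails(clauses, {'A', 'B'}, 'Q'))   # True
```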

  43. Backward Chaining Example • Hit a loop! Try something else

  44. Forward vs. Backward Chaining

  45. Proof Methods

  46. Efficient Propositional Model Checking • Two families of efficient algorithms for propositional model checking: • Complete backtracking search algorithms: the DPLL algorithm (Davis, Putnam, Logemann, Loveland) • Incomplete local search algorithms: the WalkSAT algorithm • These algorithms test a sentence for satisfiability; used for inference • Recall: satisfiability is connected to entailment via KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable

  47. DPLL Algorithm • How are DPLL and CSPs related? • Determines whether a given CNF formula is satisfiable • Improvements over truth-table enumeration: • Early termination: (a) a clause is true if any literal in it is true, even if other literals do not yet have values; (b) the formula is false if any clause is false • Pure symbol heuristic: a pure symbol always appears with the same "sign" in all clauses, e.g., in (A ∨ ¬B) ∧ (¬B ∨ ¬C) ∧ (C ∨ A), A and B are pure; C is impure. Make a pure symbol's literal true: doing this can never make a clause false. Ignore clauses that are already true in the model constructed so far • Unit clause heuristic: a unit clause has only one literal left; that literal must be true • Connection to CSP heuristics: most constrained variable, least constraining value

  48. DPLL Algorithm • Check for early termination • Try to apply the pure symbol and unit clause heuristics • If neither applies, branch on a symbol and try both truth values (brute force)
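A minimal sketch of DPLL following the structure above (early termination, pure symbol, unit clause, then branching), over CNF clauses represented as frozensets of literals; this is an illustration, not the textbook pseudocode.

```python
def lit_true(lit, model):
    """Truth value of a literal in a partial model, or None if its symbol is unassigned."""
    sym = lit.lstrip('-')
    if sym not in model:
        return None
    return model[sym] if not lit.startswith('-') else not model[sym]

def dpll(clauses, symbols, model):
    # Early termination: every clause already true -> SAT; some clause all-false -> fail.
    unknown = []
    for clause in clauses:
        values = [lit_true(l, model) for l in clause]
        if True in values:
            continue
        if None not in values:
            return False
        unknown.append(clause)
    if not unknown:
        return True
    # Pure symbol heuristic: a symbol occurring with only one sign in the unknown clauses.
    for s in symbols:
        signs = {l.startswith('-') for c in unknown for l in c if l.lstrip('-') == s}
        if len(signs) == 1:
            return dpll(clauses, symbols - {s}, {**model, s: not signs.pop()})
    # Unit clause heuristic: a clause with exactly one unassigned literal left.
    for clause in unknown:
        unassigned = [l for l in clause if lit_true(l, model) is None]
        if len(unassigned) == 1:
            lit = unassigned[0]
            return dpll(clauses, symbols - {lit.lstrip('-')},
                        {**model, lit.lstrip('-'): not lit.startswith('-')})
    # Otherwise branch on some remaining symbol (the "brute force" step).
    s = next(iter(symbols))
    return (dpll(clauses, symbols - {s}, {**model, s: True}) or
            dpll(clauses, symbols - {s}, {**model, s: False}))


# Example: (A v -B) ^ (-B v -C) ^ (C v A) is satisfiable (e.g. A = True, B = False).
clauses = [frozenset({'A', '-B'}), frozenset({'-B', '-C'}), frozenset({'C', 'A'})]
print(dpll(clauses, {'A', 'B', 'C'}, {}))   # True
```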
