Lecture 42

Presentation Transcript


  1. Lecture 42 Consciousness Final Review Part 1 of 2 Monday, 08 December 2003 William H. Hsu Department of Computing and Information Sciences, KSU http://www.kddresearch.org http://www.cis.ksu.edu/~bhsu Reading: None Final Review: Chapters 1-15, 18-19, 23, 24 R&N (emphasis on 14-15, 18-19)

  2. Lecture 1: The Intelligent Agent Framework • Artificial Intelligence (AI) • Operational definition: study / development of systems capable of “thought processes” (reasoning, learning, problem solving) • Constructive definition: expressed in artifacts (design and implementation) • Intelligent Agents • Topics and Methodologies • Knowledge representation • Logical • Uncertain (probabilistic) • Other (rule-based, fuzzy, neural, genetic) • Search • Machine learning • Planning • Applications • Problem solving, optimization, scheduling, design • Decision support, data mining • Natural language processing, conversational and information retrieval agents • Pattern recognition and robot vision

  3. Lecture 2: Agents and Problem Solving • Agent Frameworks • Reactivity vs. state • From goals to preferences (utilities) • Applications and Automation Case Studies • Search: game-playing systems, problem solvers • Planning, design, scheduling systems • Control and optimization systems • Machine learning: pattern recognition, data mining (business decision support) • Things to Check Out Online • Resources page: www.kddresearch.org/Courses/Fall-2001/CIS730/Resources • Yahoo! Group discussions: groups.yahoo.com/group/ksu-cis730-fall2001 • Suggested project topics, resources – posted in YG

  4. Lecture 3: Search and Constraints • Today’s Reading: Sections 3.5-3.8, Russell and Norvig • Solving Problems by Searching • Problem solving agents: design, specification, implementation • Specification components • Problems – formulating well-defined ones • Solutions – requirements, constraints • Measuring performance • Formulating Problems as (State Space) Search • Example Search Problems • Toy problems: 8-puzzle, 8-queens, cryptarithmetic, toy robot worlds, constraints • Real-world problems: layout, scheduling • Data Structures Used in Search • Uninformed Search Algorithms: BFS, DFS, Branch-and-Bound • Next Class: Informed Search Strategies • State space search handout (Winston) • Search handouts (Ginsberg, Rich and Knight)
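
The "formulating problems as (state space) search" item above maps onto a small interface: initial state, operators (successor function), goal test, and step cost. A minimal sketch, with class and method names chosen here for illustration rather than taken from the course handouts:

```python
# Minimal problem-specification interface: initial state, operators,
# goal test, and step cost (uniform cost unless the problem says otherwise).
class SearchProblem:
    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def actions(self, state):
        """Return the operators applicable in `state` (problem-specific)."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return 1
```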

  5. Lecture 4: Uninformed Search Algorithms • Search • Problem formulation: state space (initial / operator / goal test / cost), graph • State space search approaches • Blind (uninformed) • Heuristic (informed) • Applications • Problem solving • Optimization • Scheduling • Design • Machine learning (hypothesis space search) • More Resources Online • http://www-jcsu.jesus.cam.ac.uk/~tdk22/project • See also http://groups.yahoo.com/group/ksu-cis730-fall2001 (“REFERENCES”) • Course Project Guidelines Posted in YG • Part I: format • Part II: writing quality and criteria • Part III: resources and suggested topics
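
For the blind strategies listed above, breadth-first search is the simplest concrete example: a FIFO frontier plus a parent map for path recovery. A sketch on a hypothetical toy graph (the graph and node names are made up):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Uninformed (blind) search: expand nodes in FIFO order, tracking parents."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:                      # goal test on expansion
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for successor in graph.get(state, []):
            if successor not in parent:        # avoid revisiting states
                parent[successor] = state
                frontier.append(successor)
    return None                                # goal unreachable

# Toy state space (hypothetical):
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search(graph, 'A', 'D'))   # ['A', 'B', 'D']
```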

  6. Lecture 5: Heuristic Search Algorithms – Greedy, A* • More Heuristic Search • Best-First Search • Greedy • A/A* • Search as function maximization • Problems: ridge; foothill; plateau; jump discontinuity • Solutions: macro operators; global optimization • Constraint Satisfaction Search • Next Class: IDA*, Hill-Climbing, Iterative Improvement • Gradient descent • Global search • MCMC: intuition • Some examples of state-of-the-art applications • Properties and tradeoffs
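
Greedy best-first search orders the frontier by h(n) alone; A* orders it by f(n) = g(n) + h(n). A sketch of A* on a hypothetical weighted state space (graph, costs, and heuristic values below are made up):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for successor, cost in graph[state]:
            g2 = g + cost
            if g2 < best_g.get(successor, float('inf')):
                best_g[successor] = g2
                heapq.heappush(frontier,
                               (g2 + h[successor], g2, successor, path + [successor]))
    return None, float('inf')

# Hypothetical weighted state space and admissible heuristic:
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 4)
```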

  7. Lecture 6: More Heuristic Search – A*, Hill-Climbing / SA • More Heuristic Search • Best-First Search: A/A* concluded • Iterative improvement • Hill-climbing • Simulated annealing (SA) • Search as function maximization • Problems: ridge; foothill; plateau; jump discontinuity • Solutions: macro operators; global optimization (genetic algorithms / SA) • Next Class: Constraint Satisfaction Search, Heuristic Search • Next Week: Adversarial Search (e.g., Game Tree Search) • Competitive problems • Minimax algorithm
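
Simulated annealing addresses the ridge/foothill/plateau problems by occasionally accepting downhill moves, with probability that shrinks as the temperature cools. A sketch under an assumed geometric cooling schedule and a made-up 1-D objective:

```python
import math
import random

def simulated_annealing(value, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    """Maximize `value`: always accept uphill moves; accept downhill moves
    with probability exp(delta / T), where T decreases over time."""
    x, t = x0, t0
    for _ in range(steps):
        x_new = neighbor(x)
        delta = value(x_new) - value(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = x_new
        t *= cooling                    # geometric cooling (one common choice)
    return x

# Hypothetical 1-D objective with local maxima (ridges/foothills in miniature):
f = lambda x: -(x - 2.0) ** 2 + math.sin(5 * x)
best = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=-5.0)
print(best, f(best))                    # approximate global maximizer near x = 2
```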

  8. Lecture 7: Constraint Satisfaction Problems • Constraint Satisfaction Problems (CSPs) • Problem definition • Domain • Constraints • Examples: N-queens, cryptarithmetic, etc. • Issues to be Covered Later • Knowledge representation: how to express domain, constraints • Relational constraints • In classical logic (propositional, predicate, first-order) • In uncertain reasoning • Solving CSPs • Propositional constraints: satisfiability solver • First-order relational constraints: difficulties – later • Speeding up CSPs: iterative improvement • Gradient (hill-climbing) optimization • Simulated annealing
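
The "iterative improvement" approach to CSPs can be illustrated with min-conflicts on N-queens: start from a random complete assignment and repeatedly move a conflicted queen to the row that violates the fewest constraints. A sketch (not the course's reference implementation):

```python
import random

def min_conflicts(n=8, max_steps=10000):
    """Iterative-improvement CSP solver for N-queens: variables are columns,
    values are rows; repair one conflicted variable per step."""
    def conflicts(rows, col, row):
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    rows = [random.randrange(n) for _ in range(n)]   # random complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:
            return rows                               # all constraints satisfied
        col = random.choice(conflicted)
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None                                       # give up (rare for N-queens)

print(min_conflicts())   # prints one valid 8-queens assignment (row per column)
```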

  9. Lecture 8: Game Tree Search: Minimax • Game Graph Search • Frameworks • Two-player versus multi-player • Zero-sum versus cooperative • Perfect information versus partially-observable (hidden state) • Concepts • Utility and representations (e.g., static evaluation function) • Reinforcements: possible role for machine learning • Game tree: node/move correspondence, search ply • Family of Algorithms for Game Trees: Minimax • Propagation of credit • Imperfect decisions • Issues • Quiescence • Horizon effect • Need for (alpha-beta) pruning
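
Minimax itself is a few lines once the game tree and the static evaluation function are given. A sketch over a hypothetical 2-ply tree encoded as nested lists:

```python
def minimax(node, maximizing, evaluate, successors):
    """Propagate static-evaluation values up the game tree:
    MAX picks the largest child value, MIN picks the smallest."""
    children = successors(node)
    if not children:                      # leaf: apply static evaluation function
        return evaluate(node)
    values = [minimax(child, not maximizing, evaluate, successors)
              for child in children]
    return max(values) if maximizing else min(values)

# Hypothetical 2-ply game tree encoded as nested lists (leaves are payoffs):
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
successors = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
print(minimax(tree, True, evaluate, successors))   # 3 (MAX of the MIN values 3, 2, 2)
```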

  10. Lecture 9: More Game Tree Search: Alpha-Beta (α-β) Pruning, Expectiminimax • Games as Search Problems • Frameworks • Concepts: utility, reinforcements, game trees • Static evaluation under resource limitations • Family of Algorithms for Game Trees: Minimax • Static evaluation algorithm • To arbitrary ply • To fixed ply • Sophistications: iterative deepening, pruning • Credit propagation • Intuitive concept • Basis for simple (delta-rule) learning algorithms • State of The Field • Uncertainty in Games: Expectiminimax and Other Algorithms
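
Alpha-beta pruning returns the same minimax value while skipping branches that cannot affect the decision. A sketch, reusing the same toy tree encoding as the minimax example above:

```python
def alphabeta(node, maximizing, alpha, beta, evaluate, successors):
    """Minimax with alpha-beta pruning: cut off branches that cannot change
    the value already guaranteed higher in the tree."""
    children = successors(node)
    if not children:
        return evaluate(node)
    if maximizing:
        value = float('-inf')
        for child in children:
            value = max(value, alphabeta(child, False, alpha, beta, evaluate, successors))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cutoff: MIN above will never allow this
        return value
    value = float('inf')
    for child in children:
        value = min(value, alphabeta(child, True, alpha, beta, evaluate, successors))
        beta = min(beta, value)
        if alpha >= beta:
            break                          # alpha cutoff
    return value

# Same toy tree as the plain minimax sketch; same answer, less work.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True, float('-inf'), float('inf'),
                lambda n: n, lambda n: n if isinstance(n, list) else []))   # 3
```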

  11. Lecture 10: Logical Agents and Knowledge Representations • Logical Agents • Knowledge Bases (KB) • Logic in general • Representation languages, syntax • Inference systems • Calculi • Propositional • First-order (FOL, FOPC) • Possible Worlds • Entailment • Models • IA Toy Worlds • Wumpus world • Blocks world
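
Entailment over possible worlds can be shown with brute-force model checking: KB entails a query exactly when the query holds in every model of the KB. A sketch with a tiny Wumpus-style knowledge base (the sentences and symbol names are illustrative):

```python
from itertools import product

def tt_entails(kb, query, symbols):
    """Model checking: KB |= query iff the query holds in every model of the KB.
    `kb` and `query` are functions from a model (dict of truth values) to bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False                   # a model of KB where the query is false
    return True

# Hypothetical Wumpus-style KB: B11 <=> (P12 or P21), and no breeze in [1,1].
kb = lambda m: (m['B11'] == (m['P12'] or m['P21'])) and not m['B11']
print(tt_entails(kb, lambda m: not m['P12'], ['B11', 'P12', 'P21']))   # True
```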

  12. Lecture 11: Propositional and Predicate Logic • Logical Frameworks • Knowledge Bases (KB) • Logic in general: representation languages, syntax, semantics • Propositional logic • First-order logic (FOL, FOPC) • Model theory, domain theory: possible worlds semantics, entailment • Normal Forms • Conjunctive Normal Form (CNF) • Disjunctive Normal Form (DNF) • Horn Form • Proof Theory and Inference Systems • Sequent calculi: rules of proof theory • Derivability or provability • Properties • Soundness (derivability implies entailment) • Completeness (entailment implies derivability)

  13. Lecture 12: Foundations of First-Order Logic • FOL in Practice • FOL agents • Example: Wumpus World in FOL • Situation calculus • Frame problem and variants (see R&N sidebar) • Representational vs. inferential frame problems • Qualification problem: “what if?” • Ramification problem: “what else?” (side effects) • Successor-state axioms • Logical Languages • Propositional logic • Predicates, terms, functions, atoms (atomic sentences / atomic WFFs), WFFs • First-order logic (FOL, FOPC): universal and existential quantification

  14. Lecture 13: First-Order Knowledge Bases • Properties of Knowledge Bases (KBs) • Satisfiability and validity • Entailment and provability • Properties of Proof Systems: Soundness and Completeness • Normal Forms: CNF, DNF, Horn; Clauses vs. Terms • Frame, Ramification, Qualification Problems

  15. Lecture 14: Resolution Theorem Proving • Resolution Theorem Proving • Conjunctive Normal Form (clausal form) • Inference rule • Single-resolvent form • General form • Proof procedure: refutation • Decidability properties • FOL-SAT • FOL-NOT-SAT (language of unsatisfiable sentences; complement of FOL-SAT) • FOL-VALID • FOL-NOT-VALID • Next Class • More Prolog • Implementing unification
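
Propositional resolution with refutation can be sketched directly on clauses represented as sets of literals; the first-order case additionally needs unification (next slide). The toy KB and the '~' negation convention below are illustrative:

```python
def resolve(ci, cj):
    """All resolvents of two propositional clauses (frozensets of literals;
    a literal is a string, negation marked with a leading '~')."""
    resolvents = set()
    for lit in ci:
        neg = lit[1:] if lit.startswith('~') else '~' + lit
        if neg in cj:
            resolvents.add(frozenset((ci - {lit}) | (cj - {neg})))
    return resolvents

def resolution_refutation(clauses):
    """Refutation: KB |= query iff KB together with ~query derives the empty clause."""
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci != cj:
                    for resolvent in resolve(ci, cj):
                        if not resolvent:
                            return True    # empty clause: contradiction found
                        new.add(resolvent)
        if new <= clauses:
            return False                   # no progress: not entailed
        clauses |= new

# Hypothetical KB in clausal (CNF) form: {P, P => Q}, plus the negated query ~Q.
print(resolution_refutation([frozenset({'P'}), frozenset({'~P', 'Q'}),
                             frozenset({'~Q'})]))   # True: KB entails Q
```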

  16. Lecture 15: Logic Programming Techniques • Properties of Proof Systems (Again) • Soundness and completeness • Decidability, semi-decidability, undecidability • Resolution • Refutation • Satisfiability, Validity • Unification • Occurs check • Most General Unifier • Prolog: Tricks of The Trade • Demodulation, paramodulation • Unit resolution, set of support, input / linear resolution, subsumption • Indexing (table-based, tree-based)
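
A sketch of unification with the occurs check, using an assumed term encoding ('?'-prefixed strings for variables, tuples for compound terms). This is one common way to compute a most general unifier, not necessarily the implementation used in class:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def unify(x, y, theta=None):
    """Most general unifier (MGU) with occurs check; returns a substitution dict
    or None if the terms do not unify."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):            # unify argument lists left to right
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None                             # clash of constants or functors

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    if is_var(t) and t in theta:
        return unify(var, theta[t], theta)
    if occurs(var, t, theta):
        return None                         # occurs check: reject ?x = f(?x)
    return {**theta, var: t}

def occurs(var, t, theta):
    if var == t:
        return True
    if is_var(t) and t in theta:
        return occurs(var, theta[t], theta)
    if isinstance(t, tuple):
        return any(occurs(var, arg, theta) for arg in t)
    return False

# Hypothetical query: unify Knows(John, ?x) with Knows(?y, Mother(?y)).
print(unify(('Knows', 'John', '?x'), ('Knows', '?y', ('Mother', '?y'))))
# {'?y': 'John', '?x': ('Mother', '?y')}
```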

  17. Lecture 16: Classical Planning • Classical Planning • Planning versus search • Problematic approaches to planning • Forward chaining • Situation calculus • Representation • Initial state • Goal state / test • Operators • Efficient Representations • STRIPS axioms • Components: preconditions, postconditions (ADD, DELETE lists) • Clobbering / threatening • Reactive plans and policies • Markov decision processes
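
STRIPS operators with preconditions and ADD/DELETE lists act on states represented as sets of ground literals. A minimal sketch with a made-up blocks-world operator (the literal names are illustrative):

```python
from collections import namedtuple

# A STRIPS operator: applicable when its preconditions hold; applying it
# removes the DELETE list and adds the ADD list (all sets of ground literals).
Operator = namedtuple('Operator', ['name', 'preconditions', 'add', 'delete'])

def applicable(state, op):
    return op.preconditions <= state

def apply_op(state, op):
    return (state - op.delete) | op.add

# Hypothetical blocks-world move: put A from the table onto B.
move_a_onto_b = Operator(
    name='Move(A, Table, B)',
    preconditions={'Clear(A)', 'Clear(B)', 'On(A, Table)'},
    add={'On(A, B)'},
    delete={'On(A, Table)', 'Clear(B)'})

state = {'On(A, Table)', 'On(B, Table)', 'Clear(A)', 'Clear(B)'}
if applicable(state, move_a_onto_b):
    print(apply_op(state, move_a_onto_b))
# a state containing On(A, B), Clear(A), On(B, Table) (set order may vary)
```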

  18. Lecture 17: Partial-Order Planning • Classical Planning Framework • Planning versus search • Representation: initial state, goal state / test, operators • STRIPS Operators • Components: preconditions, postconditions (ADD, DELETE lists) • STRIPS and interference • Clobbering / threatening • Promotion / demotion • Partial-Order Planners (POP systems) • Next Week • Hierarchical abstraction planning: ABSTRIPS • Conditional plans • Reactive plans and policies • Markov decision processes

  19. Lecture 18: STRIPS and ABSTRIPS • Classical Planning Framework • Planning versus search • Representation: initial state, goal state / test, operators • STRIPS Operators • Components: preconditions, postconditions (ADD, DELETE lists) • STRIPS and interference • Clobbering / threatening • Promotion / demotion • Partial-Order Planners (POP systems) • Next Week • Hierarchical abstraction planning: ABSTRIPS • Conditional plans • Reactive plans and policies • Markov decision processes • Adapted from slides by S. Russell, UC Berkeley

  20. Lecture 19: Reaction and Replanning • Classical Planning Framework • Planning versus search • Representation: initial state, goal state / test, operators • STRIPS operators • Partial versus total-order: property of plans • Interleaved vs. noninterleaved: property of planners • Last Week • Hierarchical abstraction planning: ABSTRIPS • Conditional plans • This Week • Monitoring and replanning • Reactive plans and policies • Later • Decision theory • Markov decision processes
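
Monitoring and replanning can be summarized as: before each action, check its preconditions against the observed state, and call the planner again if the world has diverged from the plan's assumptions. A sketch with hypothetical callbacks standing in for the domain-specific pieces:

```python
def execute_with_monitoring(state, plan, preconditions_hold, execute, replan):
    """Execution monitoring with replanning (a sketch; all callbacks are assumed):
    verify each action's preconditions against the observed state and replan
    from the current state when they no longer hold."""
    while plan:
        action = plan[0]
        if not preconditions_hold(state, action):
            plan = replan(state)               # replan from the observed state
            if not plan:
                return None                    # no recovery found
            continue
        state = execute(state, action)         # real outcome may differ from the model
        plan = plan[1:]
    return state
```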

  21. Lecture 20: Reasoning under Uncertainty • Introduction to Probabilistic Reasoning • Framework: using probabilistic criteria to search H • Probability foundations • Definitions: subjectivist, objectivist; Bayesian, frequentist, logicist • Kolmogorov axioms • Bayes’s Theorem • Definition of conditional (posterior) probability • Product rule • Maximum A Posteriori (MAP) and Maximum Likelihood (ML) Hypotheses • Bayes’s Rule and MAP • Uniform priors: allow use of MLE to generate MAP hypotheses • Relation to version spaces, candidate elimination • Next Week: Chapter 15, Russell and Norvig • Later: Bayesian learning: MDL, BOC, Gibbs, Simple (Naïve) Bayes • Categorizing text and documents, other applications
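
Bayes's rule and the MAP/ML distinction can be shown with a two-hypothesis example: the posterior is proportional to likelihood times prior, and under uniform priors the MAP hypothesis coincides with the ML hypothesis. The coin example and numbers below are made up:

```python
def map_hypothesis(priors, likelihoods, data):
    """Bayes's rule: P(h | D) is proportional to P(D | h) P(h). The MAP hypothesis
    maximizes the posterior; with uniform priors this reduces to ML."""
    posteriors = {}
    for h, prior in priors.items():
        likelihood = 1.0
        for d in data:                              # i.i.d. observations
            likelihood *= likelihoods[h][d]
        posteriors[h] = likelihood * prior
    z = sum(posteriors.values())                    # normalizing constant P(D)
    posteriors = {h: p / z for h, p in posteriors.items()}
    return max(posteriors, key=posteriors.get), posteriors

# Hypothetical two-hypothesis example: is a coin fair or biased, given 3 heads?
priors = {'fair': 0.75, 'biased': 0.25}
likelihoods = {'fair': {'H': 0.5, 'T': 0.5}, 'biased': {'H': 0.9, 'T': 0.1}}
print(map_hypothesis(priors, likelihoods, ['H', 'H', 'H']))
# MAP hypothesis is 'biased' here (posterior ~ 0.66)
```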

  22. Lecture 21: Introduction to Bayesian Networks • Graphical Models of Probability • Bayesian belief networks (BBNs), aka belief networks, aka causal networks • Conditional independence, causal Markovity • Inference and learning using Bayesian networks • Representation of distributions: conditional probability tables (CPTs) • Learning polytrees (singly-connected BBNs) and tree-structured BBNs (trees) • BBN Inference • Type of probabilistic reasoning • Finds answer to query about P(x), aka QA • Learning in BBNs: In Two Weeks • Known structure • Partial observability
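
Exact BBN inference by enumeration sums the joint distribution, written as a product of CPT entries, over the hidden variables. A sketch on a made-up three-node chain (not one of the course's example networks; the CPT numbers are invented):

```python
from itertools import product

# A tiny belief network, Cloudy -> Rain -> WetGrass, as CPTs: each entry maps a
# tuple of parent values to P(variable = True).
network = [
    ('Cloudy',   (),          {(): 0.5}),
    ('Rain',     ('Cloudy',), {(True,): 0.8, (False,): 0.1}),
    ('WetGrass', ('Rain',),   {(True,): 0.9, (False,): 0.2}),
]

def joint(assignment):
    """P(full assignment) = product over nodes of P(x | parents(x)) from the CPTs."""
    p = 1.0
    for var, parents, cpt in network:
        p_true = cpt[tuple(assignment[parent] for parent in parents)]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

def query(var, evidence):
    """Exact inference by enumeration: sum the joint over the hidden variables."""
    names = [v for v, _, _ in network]
    dist = {True: 0.0, False: 0.0}
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(assignment[e] == val for e, val in evidence.items()):
            dist[assignment[var]] += joint(assignment)
    z = dist[True] + dist[False]
    return {v: p / z for v, p in dist.items()}

print(query('Rain', {'WetGrass': True}))   # P(Rain | WetGrass = true): {True: ~0.79, False: ~0.21}
```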
