
CSE 571: Artificial Intelligence


Instructor: Subbarao Kambhampati

Class Time: 12:40—1:55 M/W

rao@asu.edu

Homepage: http://rakaposhi.eas.asu.edu/cse571

Office Hours: TBD (probably M/W 8-9AM)

- “Run it as a Graduate Level Follow-on to CSE 471”
- Broad objectives
- Deeper treatment of some of the 471 topics
- More emphasis on tracking current state of the art
- Training for literature survey and independent projects

- Chapters from the R&N 3rd edition
- First reading:

- Chapters from Koller & Friedman and Nau et al.
- HTN Planning
- Templated graphical models from

- Tutorial papers, etc.

- 4 main ways
- Participate in the class actively. Read assigned chapters/papers; submit reviews before the class; take part in the discussion
- Learn/Present the state of the art in a sub-area of AI
- You will pick papers from AAAI 2010 as a starting point

- Work on a semester-long project
- Can be in groups of two (or, in exceptional circumstances, 3)

- Take the mid-term and/or final exam

Week 1: Intro; Intelligent agent design [R&N Ch 1, Ch 2]

Week 2: Problem Solving Agents [R&N Ch 3 3.1--3.5]

Week 3: Informed search [R&N Ch 3 3.1--3.5]

Week 4: CSPs and Local Search [R&N Ch 5.1--5.3; Ch 4 4.3]

Week 5: Local Search and Propositional Logic [R&N Ch 4 4.3; Ch 7.1--7.6]

Week 6: Propositional Logic --> Plausible Reasoning [R&N Ch 7.1--7.6; Ch 13 13.1--13.5]

Week 7: Representations for Reasoning with Uncertainty [R&N Ch 13 13.1--13.5]

Week 8: Bayes Nets: Specification & Inference [R&N Ch 13 13.1--13.5]

Week 9: Bayes Nets: Inference [R&N Ch 13 13.1--13.5] (Here is a fully worked out example of variable elimination)

Week 10: Sampling methods for Bayes net Inference; First-order logic start [R&N Ch 13.5]

Week 11: Unification, Generalized Modus Ponens, Skolemization and resolution refutation

Week 12: Reasoning with change: Planning

Week 13: Planning, MDPs & Game-tree search

Week 14: Learning
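Week 9 points to a fully worked-out variable-elimination example. As a minimal sketch of the idea (the two-node network, names, and CPT numbers below are illustrative assumptions, not the course's worked example), eliminating a variable means summing it out of the product of the factors that mention it:

```python
from itertools import product

# Toy network A -> B; compute P(B) by eliminating A.
P_A = {'a0': 0.6, 'a1': 0.4}                               # P(A)
P_B_GIVEN_A = {('a0', 'b0'): 0.7, ('a0', 'b1'): 0.3,
               ('a1', 'b0'): 0.2, ('a1', 'b1'): 0.8}       # P(B | A)

def eliminate_a():
    """Compute P(B) by summing A out of the product P(A) * P(B | A)."""
    factor = {}
    for a, b in product(P_A, ['b0', 'b1']):
        factor[b] = factor.get(b, 0.0) + P_A[a] * P_B_GIVEN_A[(a, b)]
    return factor

P_B = eliminate_a()
# P(b0) = 0.6*0.7 + 0.4*0.2 = 0.50; P(b1) = 0.6*0.3 + 0.4*0.8 = 0.50
```

In a larger network the same sum-out step is applied once per non-query variable, multiplying together only the factors that mention that variable.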

Table of Contents (Full Version)

Preface (html); chapter map

Part I: Artificial Intelligence
1 Introduction
2 Intelligent Agents

Part II: Problem Solving
3 Solving Problems by Searching
4 Informed Search and Exploration
5 Constraint Satisfaction Problems
6 Adversarial Search

Part III: Knowledge and Reasoning
7 Logical Agents
8 First-Order Logic
9 Inference in First-Order Logic
10 Knowledge Representation

Part IV: Planning
11 Planning (pdf)
12 Planning and Acting in the Real World

Part V: Uncertain Knowledge and Reasoning
13 Uncertainty
14 Probabilistic Reasoning
15 Probabilistic Reasoning Over Time
16 Making Simple Decisions
17 Making Complex Decisions

Part VI: Learning
18 Learning from Observations
19 Knowledge in Learning
20 Statistical Learning Methods
21 Reinforcement Learning

Part VII: Communicating, Perceiving, and Acting
22 Communication
23 Probabilistic Language Processing
24 Perception
25 Robotics

Part VIII: Conclusions
26 Philosophical Foundations
27 AI: Present and Future

- Introduction
- Audio of [Aug 24, 2009] Course overview. Contains a "review" of 471 topics that I expect students to know.

- Pendulum Swings and current trends in AI
- Audio of [Aug 26, 2009] Long discussion on current trends and pendulum swings in AI

- Beyond Classical Search (Non-deterministic; Partially Observable)
- Audio of [Aug 31, 2009] Search with non-deterministic actions and search in belief space.
- Audio of [Sep 2, 2009] Belief-space search; propositional representations for belief states (CNF, DNF, and BDD models); observation models; effect of observation actions on search and execution; state-estimation problems (and how they become more interesting in the case of stochastic transition models).
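The belief-space search discussed in these lectures can be sketched minimally: represent a belief state as a set of possible world states, progress it through a nondeterministic action, and filter it on an observation. (The three-state model below is an illustrative assumption, not from the lecture.)

```python
# Belief states as frozensets of world states. A nondeterministic action maps
# each state to a set of possible successors; sensing filters the belief.
SUCCESSORS = {'flip': {1: {1, 2}, 2: {2}, 3: {1, 3}}}
PERCEPT = {1: 'odd', 2: 'even', 3: 'odd'}     # deterministic observation model

def progress(belief, action):
    """Progression through a nondeterministic action: union of all successors."""
    return frozenset(s2 for s in belief for s2 in SUCCESSORS[action][s])

def update(belief, percept):
    """Keep only the states consistent with the received percept."""
    return frozenset(s for s in belief if PERCEPT[s] == percept)

b = frozenset({1, 3})
b = progress(b, 'flip')      # belief grows: {1, 2, 3}
b = update(b, 'even')        # observation shrinks it: {2}
```

Conformant planning searches over `progress` alone; conditional planning branches on the possible results of `update`.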

- Online Search (which ends beyond classical search); Belief-space planning
- Audio of [Sep 9, 2009] Online Search--motivations, methods; model incompleteness considerations; need for ergodicity of the environment. Connections to Reinforcement Learning.
- Audio of [Sep 11, 2009] Issues of Conformant and conditional planners searching in belief space.

- Heuristics for Belief Search
- Audio of [Sep 14, 2009] Heuristics for belief-space search.

- MDPs
- Audio of [Sep 16, 2009] Markov Decision Processes
- Audio of [Sep 27, 2009] MDPs continued
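The MDP lectures center on computing optimal value functions; a minimal value-iteration sketch follows (the two-state MDP, its transition probabilities, and rewards are illustrative assumptions, not from the lecture):

```python
# Value iteration for a tiny two-state MDP.
STATES = ['s0', 's1']
ACTIONS = ['stay', 'go']
# P[s][a] = list of (next_state, probability)
P = {'s0': {'stay': [('s0', 1.0)], 'go': [('s1', 0.9), ('s0', 0.1)]},
     's1': {'stay': [('s1', 1.0)], 'go': [('s0', 1.0)]}}
R = {'s0': 0.0, 's1': 1.0}               # reward for being in a state
GAMMA = 0.9

def value_iteration(eps=1e-6):
    """Iterate the Bellman backup V(s) = R(s) + gamma * max_a sum_s' P * V(s')
    until the value function stops changing."""
    V = {s: 0.0 for s in STATES}
    while True:
        newV = {s: R[s] + GAMMA * max(sum(p * V[s2] for s2, p in P[s][a])
                                      for a in ACTIONS)
                for s in STATES}
        if max(abs(newV[s] - V[s]) for s in STATES) < eps:
            return newV
        V = newV

V = value_iteration()
# Fixed point: V(s1) = 1/(1 - 0.9) = 10; V(s0) = 0.9*(0.9*10)/(1 - 0.09) ~ 8.90
```

RTDP (discussed below) performs the same Bellman backup, but only on states reached by simulated greedy trajectories rather than sweeping all of them.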

- Efficient/Approximate approaches for MDP solving
- Types of MDPs (for planning); RTDP
- Probabilistic Planning Competition
- FF-Hop (and FF-replan etc)
- Audio of [Sep 29, 2009] Special cases of MDPs; Classes of efficient methods for MDPs
- Audio of [Oct 2, 2009] LRTDP; FF-Hop

- Heuristics for Stochastic Planning
- Reinforcement Learning
- Audio of [Oct 5, 2009] (part 1) Use of heuristics in stochastic planning; (part 2) start of reinforcement learning (Monte Carlo and adaptive DP; exploration/exploitation)
- Audio of [Oct 7, 2009] Planning--acting--learning cycle in reinforcement-learning terminology; the role of (and the difference between) simulator and model. Temporal-difference learning; generalizing TD to k-step TD and then to TD(λ) learning. Q-learning. Exploration policies for Q-learning.
- Audio of [Oct 12, 2009] Revisiting TD(λ); exploration strategies for Q-learning (that make less-visited states look better); the spectrum of atomic RL strategies and their dimensions of variation (DP-based vs. sample-based, and exhaustive vs. one-step lookahead). Start of factored RL models and the advantages of representing value and policy functions in a factored fashion. Basic idea of function-approximation techniques for RL.
- Audio of [Oct 14, 2009] TD-learning and Q-learning with function approximation. Policy-gradient search.
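The tabular Q-learning and epsilon-greedy exploration covered in these lectures can be sketched as follows; the two-state environment, reward, and constants are illustrative assumptions:

```python
import random

# Tabular Q-learning with epsilon-greedy exploration on a toy environment:
# from s0, 'right' reaches the terminal goal s1 with reward 1.
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
ACTIONS = ['left', 'right']
Q = {('s0', a): 0.0 for a in ACTIONS}

def step(state, action):
    """Illustrative deterministic environment: 'right' from s0 reaches the goal."""
    if action == 'right':
        return 's1', 1.0, True          # next state, reward, episode over
    return 's0', 0.0, False

random.seed(0)
for _ in range(100):                     # episodes
    s, done = 's0', False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # TD(0) backup: Q(s,a) <- Q(s,a) + alpha * (target - Q(s,a))
        target = r if done else r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
# After training, Q[('s0', 'right')] ~ 1.0 and exceeds Q[('s0', 'left')].
```

Note that learning needs only the `step` simulator, not the transition model itself; that is the simulator-vs-model distinction the Oct 7 lecture draws.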

- Decision Theory & Preference Handling
- Audio of [Oct 19, 2009] Start of discussion of decision/utility theory (R&N Chap 16)
- Audio of [Oct 21, 2009] Multi-attribute utility theory; discussion of Preference handling tutorial.
- Audio of [Oct 26, 2009] Preference handling: partial-ordering preferences; CP-nets; preference compilation.

- Temporal Probabilistic Models
- Audio of [Oct 28, 2009] Connecting the Dynamic Bayes Networks chapter to "State Estimation" (from the first part of the semester) and "Relational Models" (from the part yet to come); specifying DBNs; types of queries on DBNs.
- Audio of [Nov 2, 2009] Discussion of exact inference based on simultaneous roll-out and roll-up in dynamic Bayes nets; motivation of Kalman filters from the point of view of specifying the parameterized distribution for continuous variables.
- Audio of [Nov 4, 2009] Discussion of particle-filtering techniques for dynamic Bayes networks; discussion of factored action-representation methods for stochastic planning.
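The particle-filtering idea from the Nov 4 lecture can be sketched as a bootstrap filter on a two-state DBN; the umbrella-world transition and observation numbers below are illustrative assumptions:

```python
import random

# Bootstrap particle filter for a two-state dynamic Bayes net.
T = {'rain': {'rain': 0.7, 'sun': 0.3},          # P(X_t | X_{t-1})
     'sun':  {'rain': 0.3, 'sun': 0.7}}
OBS = {'rain': {'umbrella': 0.9, 'none': 0.1},   # P(e_t | X_t)
       'sun':  {'umbrella': 0.2, 'none': 0.8}}

def pf_step(particles, evidence, rng):
    """One filtering step: propagate each particle through the transition
    model, weight by the evidence likelihood, then resample."""
    moved = [rng.choices(list(T[p]), weights=list(T[p].values()))[0]
             for p in particles]
    weights = [OBS[p][evidence] for p in moved]
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
particles = ['rain'] * 500 + ['sun'] * 500       # uniform prior, 1000 particles
for e in ['umbrella', 'umbrella', 'umbrella']:
    particles = pf_step(particles, e, rng)
belief_rain = particles.count('rain') / len(particles)
# belief_rain approximates P(rain | three umbrella observations), ~0.89 here
```

Unlike the exact roll-out/roll-up inference from Nov 2, the particle count (not the state-space size) controls the cost, which is why this scales to large DBNs.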

- Statistical Learning
- Audio of [Nov 16, 2009] Foundations of statistical learning: full Bayesian, MAP, and ML estimation--and the tradeoffs. The importance of the i.i.d. assumption and of the hypothesis prior.
- Audio of [Nov 18, 2009] Density estimation; bias-variance tradeoff; generative vs. discriminative learning; taxonomy of learning tasks.
- Audio of [Nov 23, 2009] ML estimation of parameters with complete data in Bayes networks; understanding when and why parameter estimation decouples into separate problems. The incomplete-data problem. The database example. The hidden-variable problem--why we would focus on the hidden variable rather than learn from complete data (because we can reduce the number of parameters exponentially).
- Audio of [Nov 25, 2009] Expectation Maximization--and why it works. Variants of EM. Connections between EM and other function-optimization algorithms (such as gradient descent and Newton-Raphson).
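The EM lecture's core loop can be sketched on the classic two-coin problem: each trial reports the number of heads in 10 flips of one of two coins, but which coin was used is hidden. The data and starting parameters below are illustrative assumptions:

```python
DATA = [5, 9, 8, 4, 7]       # heads out of N flips per trial
N = 10

def likelihood(heads, theta):
    """Binomial likelihood of a trial (the binomial coefficient is omitted,
    since it cancels in the responsibility ratio)."""
    return theta ** heads * (1 - theta) ** (N - heads)

def em_step(theta_a, theta_b):
    """E-step: responsibility of coin A for each trial under current params.
    M-step: re-estimate each coin's bias from responsibility-weighted counts."""
    heads_a = flips_a = heads_b = flips_b = 0.0
    for h in DATA:
        la, lb = likelihood(h, theta_a), likelihood(h, theta_b)
        resp_a = la / (la + lb)          # P(coin = A | trial, current params)
        heads_a += resp_a * h
        flips_a += resp_a * N
        heads_b += (1 - resp_a) * h
        flips_b += (1 - resp_a) * N
    return heads_a / flips_a, heads_b / flips_b

theta_a, theta_b = 0.6, 0.5              # asymmetric start to break symmetry
for _ in range(50):
    theta_a, theta_b = em_step(theta_a, theta_b)
# theta_a converges to the high-bias coin, theta_b to the low-bias one.
```

Each iteration provably does not decrease the data likelihood, which is the "why it works" argument from the lecture.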

- Inference and Learning in Markov Nets (undirected graphical models)+ may be Markov Logic Nets
- Audio of [Nov 30, 2009] Bayesian learning for Bayes nets. Conjugate priors and their use. Bayesian prediction and how it explains the rationale behind the Laplacian correction. Start of Markov nets--undirected graphical models. How they differ from Bayes nets: easier independence condition; more straightforward definition. Specification of Markov nets in terms of clique potentials.
- Audio of [Dec 2, 2009] Markov networks: expressiveness; parameterization (product form; log-linear); semantics; inference techniques; learning (generative case)--the need for gradient ascent, and the need for inference in gradient computation.
- Audio of [Dec 7, 2009] Class discussion on Markov logic networks touching on topics such as (a) is relational learning useful if we still do ground-level inference? (b) the fact that learning is always easier in MLNs than MNs--and that all the MLN ideas/challenges for learning are basically holdovers from MNs; (c) the tradeoffs of lifted inference (and how planning has basically abandoned lifted planning--while probabilistic models are going in that direction!).
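The clique-potential specification of Markov nets from the Nov 30 lecture can be sketched on a tiny chain A - B - C; the potential values below are illustrative assumptions:

```python
from itertools import product

# A chain Markov network A - B - C specified by clique potentials. Unlike a
# Bayes net, the joint is the product of undirected potentials divided by a
# partition function Z (no per-node conditional distributions).
PHI_AB = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}  # favors A == B
PHI_BC = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}  # favors B == C

def unnormalized(a, b, c):
    """Product of clique potentials for one full assignment."""
    return PHI_AB[(a, b)] * PHI_BC[(b, c)]

# Partition function: sum of the unnormalized measure over all assignments.
Z = sum(unnormalized(a, b, c) for a, b, c in product([0, 1], repeat=3))

def prob(a, b, c):
    return unnormalized(a, b, c) / Z

# prob(0, 0, 0) = (3 * 2) / 24 = 0.25
```

Computing Z requires summing over all assignments, which is exactly why generative learning of Markov nets needs inference inside each gradient step, as the Dec 2 lecture notes.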

We will skip "beyond classical search" and start with planning.

- Top-down vs. Bottom-up
- Ground vs. Lifted representation
- The longer I live the farther down the Chomsky Hierarchy I seem to fall [Fernando Pereira]

- Pure Inference and Pure Learning vs. Interleaved inference and learning
- Knowledge Engineering vs. Model Learning
- Human-aware vs. Stand-Alone

[Figure: topics plotted against semester time by representation level -- first-order (FOPC, situation calculus), relational (FOPC without functions, STRIPS planning), propositional/factored (CSP, propositional logic, Bayes nets, decision trees), and atomic (state-space search, MDPs, min-max).]

The plot shows the various topics we discussed this semester and the representational level at which we discussed them. At a minimum, we need to understand every task at the atomic representation level. Once we figure out how to do something at the atomic level, we always strive to do it at higher (propositional, relational, first-order) levels for efficiency and compactness. During the course we may not discuss certain tasks at higher representation levels, either for lack of time or because there simply doesn't yet exist an undergraduate-level understanding of that topic at higher levels of representation.