
CSE 571: Artificial Intelligence


Presentation Transcript


  1. CSE 571: Artificial Intelligence
  Instructor: Subbarao Kambhampati (rao@asu.edu)
  Homepage: http://rakaposhi.eas.asu.edu/cse571
  Office Hours: right after class, 3:15–4:15pm, BY560

  2. History
  • At ASU, CSE 471/598 has been taught as the main introductory AI course
    • Normally taught by either Rao or Huan Liu
  • 571 has been taught as a graduate-level AI course
    • It didn't necessarily require 471
    • It didn't necessarily have a breadth aspect
    • Nick Findler taught it for a long time and would focus on distributed AI
    • Chitta Baral taught it after Nick and would focus on knowledge representation
  • The last time Rao taught it was in 1996
    • Looking back at that syllabus, it looks like the 571 I taught then is a subset of 471 as I teach it now

  3. CSE 571 this time?
  • "Run it as a graduate-level follow-on to CSE 471"
  • Broad objectives
    • Deeper treatment of some of the 471 topics
    • More emphasis on tracking the current state of the art
    • Training for literature surveys and independent projects

  4. Who are you & what do you want?

  5. What we did in 471
  • Week 1: Intro; intelligent agent design [R&N Ch 1, Ch 2]
  • Week 2: Problem-solving agents [R&N Ch 3, 3.1--3.5]
  • Week 3: Informed search [R&N Ch 3, 3.1--3.5]
  • Week 4: CSPs and local search [R&N Ch 5, 5.1--5.3; Ch 4, 4.3]
  • Week 5: Local search and propositional logic [R&N Ch 4, 4.3; Ch 7, 7.1--7.6]
  • Week 6: Propositional logic → plausible reasoning [R&N Ch 7, 7.1--7.6; Ch 13, 13.1--13.5]
  • Week 7: Representations for reasoning with uncertainty [Ch 13, 13.1--13.5]
  • Week 8: Bayes nets: specification & inference [Ch 13, 13.1--13.5]
  • Week 9: Bayes nets: inference [Ch 13, 13.1--13.5] (here is a fully worked-out example of variable elimination; a minimal code sketch also follows this list)
  • Week 10: Sampling methods for Bayes net inference; first-order logic start [Ch 13.5]
  • Week 11: Unification, generalized modus ponens, Skolemization, and resolution refutation
  • Week 12: Reasoning with change / Planning
  • Week 13: Planning, MDPs & game-tree search
  • Week 14: Learning
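Since the Week 9 entry points to a worked example of variable elimination, here is a minimal companion sketch of the algorithm. The two-node network A → B, the factor layout, and the probability values are illustrative assumptions, not the course's worked example.

```python
# Variable elimination on a two-node network A -> B:
# compute P(B) by multiplying P(A) with P(B|A) and summing A out.
# All numbers below are made up for illustration.

from itertools import product

class Factor:
    """Maps assignments (tuples of values, ordered as in `variables`) to numbers."""
    def __init__(self, variables, table):
        self.variables = variables    # e.g. ("A", "B")
        self.table = table            # e.g. {(0, 0): 0.9, ...}

def multiply(f, g):
    """Pointwise product over the union of the two factors' variables."""
    vs = tuple(dict.fromkeys(f.variables + g.variables))
    table = {}
    for assignment in product((0, 1), repeat=len(vs)):   # Boolean domains for brevity
        env = dict(zip(vs, assignment))
        table[assignment] = (f.table[tuple(env[v] for v in f.variables)]
                             * g.table[tuple(env[v] for v in g.variables)])
    return Factor(vs, table)

def sum_out(f, var):
    """Eliminate `var` by summing it out of factor f."""
    i = f.variables.index(var)
    table = {}
    for assignment, p in f.table.items():
        key = assignment[:i] + assignment[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return Factor(f.variables[:i] + f.variables[i + 1:], table)

# Illustrative numbers: P(A=1) = 0.3; P(B=1 | A=0) = 0.1; P(B=1 | A=1) = 0.8.
p_a = Factor(("A",), {(0,): 0.7, (1,): 0.3})
p_b_given_a = Factor(("A", "B"), {(0, 0): 0.9, (0, 1): 0.1,
                                  (1, 0): 0.2, (1, 1): 0.8})

# P(B) = sum over a of P(a) * P(B | a)
p_b = sum_out(multiply(p_a, p_b_given_a), "A")
print(p_b.table)   # approximately {(0,): 0.69, (1,): 0.31}
```

On a chain this small the elimination order is irrelevant, but on larger networks the order determines the size of the intermediate factors, which is the induced tree-width idea that appears later in these slides.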

  6. R&N Table of Contents (Full Version); chapters covered in 471 (Spring 09)
  Preface (html); chapter map
  Part I: Artificial Intelligence
    1 Introduction
    2 Intelligent Agents
  Part II: Problem Solving
    3 Solving Problems by Searching
    4 Informed Search and Exploration
    5 Constraint Satisfaction Problems
    6 Adversarial Search
  Part III: Knowledge and Reasoning
    7 Logical Agents
    8 First-Order Logic
    9 Inference in First-Order Logic
    10 Knowledge Representation
  Part IV: Planning
    11 Planning (pdf)
    12 Planning and Acting in the Real World
  Part V: Uncertain Knowledge and Reasoning
    13 Uncertainty
    14 Probabilistic Reasoning
    15 Probabilistic Reasoning Over Time
    16 Making Simple Decisions
    17 Making Complex Decisions
  Part VI: Learning
    18 Learning from Observations
    19 Knowledge in Learning
    20 Statistical Learning Methods
    21 Reinforcement Learning
  Part VII: Communicating, Perceiving, and Acting
    22 Communication
    23 Probabilistic Language Processing
    24 Perception
    25 Robotics
  Part VIII: Conclusions
    26 Philosophical Foundations
    27 AI: Present and Future

  7. Adieu with an Oskar Schindler Routine
  Schindler: I could've got more... I could've got more, if I'd just... I could've got more...
  Stern: Oskar, there are eleven hundred people who are alive because of you. Look at them.
  Schindler: If I'd made more money... I threw away so much money, you have no idea. If I'd just...
  Stern: There will be generations because of what you did.
  Schindler: I didn't do enough.
  Stern: You did so much.
  Schindler: This car. Goeth would've bought this car. Why did I keep the car? Ten people, right there. Ten people, ten more people... (He rips the swastika pin from his lapel) This pin, two people. This is gold. Two more people. He would've given me two for it. At least one. He would've given me one. One more. One more person. A person, Stern. For this. I could've gotten one more person and I didn't.

  • Top few things I would have done if I had more time:
    • Statistical learning
    • Reinforcement learning; bagging/boosting
    • Planning under uncertainty and incompleteness
    • Ideas of induced tree-width
    • Multi-agent X (X = search, learning, ...)
    • PERCEPTION (speech; language...)
    • Be less demanding more often (or even once...)

  Rao: I could've taught more... I could've taught more, if I'd just... I could've taught more...
  Yunsong: Rao, there are thirty people who are mad at you because you taught too much. Look at them.
  Rao: If I'd made more time... I wasted so much time, you have no idea. If I'd just...
  Yunsong: There will be generations (of bitter people) because of what you did.
  Rao: I didn't do enough.
  Yunsong: You did so much.
  Rao: This slide. We could've removed this slide. Why did I keep the slide? Two minutes, right there. Two minutes, two more minutes... This music, a bit on reinforcement learning. This review. Two points on bagging and boosting. I could easily have made two for it. At least one. I could've gotten one more point across. One more. One more point. A point, Yunsong. For this. I could've gotten one more point across and I didn't.

  8. Things I know I want to cover
  • Search
    • Local vs. systematic
    • Optimization in continuous domains
  • Constraint networks
    • Tree-width concepts; temporal constraint networks
  • Reasoning: Planning
    • Temporal planning; belief-space planning; stochastic planning
    • POMDPs; Dec-POMDPs?
  • KR: Templated probabilistic networks
    • Dynamic probabilistic networks
    • Relational probabilistic networks
  • Learning
    • Relational learning
    • Reinforcement learning

  9. Reading Material... Eclectic
  • Chapters from the new edition (in preparation) of R&N (in some cases)
    • First reading: Advanced Search Techniques chapter (will be distributed in hardcopy)
  • Chapters from other books
    • POMDPs from Thrun/Burgard/Fox
    • Templated graphical models from Koller & Friedman
    • CSP/tree-width material from Dechter
  • Tutorial papers, etc.

  10. "Grading"?
  • 3 main ways
    • Participate in the class actively: read assigned chapters/papers, submit reviews before the class, and take part in the discussion
    • Learn/present the state of the art in a sub-area of AI
      • You will pick papers from IJCAI 2009 as a starting point: http://ijcai.org/papers09/contents.php
    • Work on a semester-long project
      • Can be in groups of two (or, in exceptional circumstances, 3)

  11. Deadlines...
  • AAMAS deadline: 10/8/09
  • KR deadline: 11/10/09
  • ICAPS deadline: 12/16/09
  • AAAI deadline: 1/15/10
  • ICML deadline: ~2/10/10

  12. Discussion
  • What are the current controversies in AI? What are the hot topics in AI?

  13. Pendulum swings in AI
  • Top-down vs. bottom-up
  • Ground vs. lifted representation (see the sketch after this list)
    • "The longer I live the farther down the Chomsky hierarchy I seem to fall" [Fernando Pereira]
  • Pure inference and pure learning vs. interleaved inference and learning
  • Knowledge engineering vs. model learning
  • Human-aware vs. ...
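To make the ground vs. lifted contrast concrete, here is a minimal sketch with made-up predicate and object names: a single lifted rule stands for the whole set of ground rules obtained by substituting domain objects for its variables.

```python
# Ground vs. lifted, in miniature: one lifted rule, Friends(x, y) -> Likes(x, y),
# stands for |objects|^2 ground rules. Predicate and object names are made up.

from itertools import product

objects = ["alice", "bob", "carol"]

# A rule is (body_atom, head_atom); each atom is (predicate, *arguments).
lifted_rule = (("Friends", "x", "y"), ("Likes", "x", "y"))

def ground(rule, domain):
    """Enumerate all ground instances of a lifted rule over a finite domain."""
    (p, *p_args), (q, *q_args) = rule
    variables = sorted(set(p_args + q_args))
    for values in product(domain, repeat=len(variables)):
        theta = dict(zip(variables, values))     # substitution, e.g. {"x": "alice"}
        yield ((p, *(theta[a] for a in p_args)),
               (q, *(theta[a] for a in q_args)))

ground_rules = list(ground(lifted_rule, objects))
print(len(ground_rules))   # 9: one lifted rule became 3^2 ground rules
print(ground_rules[0])     # (('Friends', 'alice', 'alice'), ('Likes', 'alice', 'alice'))
```

Lifted reasoning works with the one rule directly; grounding first is what blows up as the domain grows, which is what the pendulum between the two representations is about.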

  14. The representational roller-coaster in CSE 471
  [Figure: the semester's topics plotted against time, by representation level. First-order: FOPC, situation calculus. Relational (FOPC without functions): STRIPS planning. Propositional (factored): CSP, propositional logic, Bayes nets, decision trees. Atomic: state-space search, MDPs, min-max.]
  The plot shows the various topics we discussed this semester, and the representational level at which we discussed them. At a minimum we need to understand every task at the atomic representation level. Once we figure out how to do something at the atomic level, we always strive to do it at higher (propositional, relational, first-order) levels for efficiency and compactness. During the course we may not discuss certain tasks at higher representation levels, either for lack of time or because there simply does not yet exist an undergraduate-level understanding of that topic at higher levels of representation. (A small illustration of the atomic vs. factored contrast follows below.)
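As a hypothetical illustration of that contrast (the state variables below are invented for the example, not taken from the course): with n Boolean variables, a factored representation describes 2^n atomic states without ever listing them, and a goal becomes a single test on variables rather than an explicit set of state names.

```python
# Atomic vs. factored (propositional) views of the same state space.
# The state variables are invented for this illustration.

from itertools import product

variables = ["door_open", "light_on", "have_key"]

# Atomic view: each state is an opaque name; the space must be enumerated.
atomic_states = [f"s{i}" for i in range(2 ** len(variables))]

# Factored view: a state is an assignment to the variables.
factored_states = [dict(zip(variables, bits))
                   for bits in product([False, True], repeat=len(variables))]

print(len(atomic_states), len(factored_states))   # 8 8

# Compactness: "the light is on" is one test in the factored view,
# but an explicit set of 4 opaque names in the atomic view.
goal_states = [s for s in factored_states if s["light_on"]]
print(len(goal_states))                           # 4
```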
