
Lisp recitation after class in the same room



Presentation Transcript


  1. Lisp recitation after class in the same room 1/12

  2. Time for Buyer’s Remorse? • Final class tally: • Total 46 (room capacity) • CSE 471: 28 [61%; 1 sophomore; 3 juniors; 24 seniors] • CSE 598: 18 [39%; 1 PhD, 13 MS, 4 MCS] • “This is one of the most exciting courses in the department! Unlike other courses that form the basis of a field of study, this course is (sort of) at the top of the food chain, so the concepts that we learnt in various other fields are applied here to solve practical problems and create systems that are truly useful.” --Unedited comment of a student from the Spring 2009 class

  3. How the course topics stack up… • Representation Mechanisms: Logic (propositional; first-order); Probabilistic logic; Learning the models • Search: Blind, Informed • Planning • Inference: Logical resolution; Bayesian inference

  4. Agent Classification in Terms of State Representations

  5. Illustration with Vacuum World
  • Atomic: States S1, S2, …, S8. Each state is seen as an indivisible snapshot; all actions are S×S matrices. If you add a second roomba, the state space doubles; if you want to consider “noisiness” of the rooms, the representation quadruples.
  • Propositional/Factored: States made up of 3 state variables (Dirt-in-left-room T/F; Dirt-in-right-room T/F; Roomba-in-room L/R). Each state is an assignment of values to state variables, giving 2^3 different states. Actions can just mention the variables they affect. Note that the representation is compact (logarithmic in the size of the state space). If you add a second roomba, the representation increases by just one more state variable; if you want to consider “noisiness” of rooms, we need two variables, one for each room.
  • Relational: World made of objects (Roomba, L-room, R-room, dirt) and relations (In(<robot>, <room>); Dirty(<room>)). If you add a second roomba, or more rooms, only the objects increase. If you want to consider noisiness, you just need to add one other relation.
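To make the atomic vs. factored contrast concrete, here is a minimal Python sketch (not from the slides; all names are illustrative) of the two encodings of the vacuum world:

```python
from itertools import product

# Atomic: states are opaque labels; we must enumerate all of them.
ATOMIC_STATES = [f"S{i}" for i in range(1, 9)]          # S1 .. S8

# Factored: a state is an assignment to three state variables.
VARIABLES = {
    "dirt_left":  [True, False],
    "dirt_right": [True, False],
    "roomba_in":  ["L", "R"],
}

factored_states = [dict(zip(VARIABLES, values))
                   for values in product(*VARIABLES.values())]
assert len(factored_states) == 2 ** 3 == len(ATOMIC_STATES)

# A factored action mentions only the variables it affects:
def suck(state):
    """Clean the room the roomba currently occupies."""
    new = dict(state)
    if state["roomba_in"] == "L":
        new["dirt_left"] = False
    else:
        new["dirt_right"] = False
    return new

# Adding a second roomba grows the factored representation by one variable,
# while the atomic state space would double to 16 opaque labels.
VARIABLES["roomba2_in"] = ["L", "R"]
```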

  6. Atomic or … [remainder of this slide was a figure and did not survive extraction]

  7. Simple goal: Both rooms should be clean.

  8. What happens when the domain is inaccessible?

  9. Search in the multi-state (inaccessible) version: the set of states is called a “belief state,” so we are searching in the space of belief states. Notice that actions can sometimes reduce state uncertainty, and sensing reduces state uncertainty. The space of belief states is exponentially larger than the space of states; if you throw in the likelihood of states in a belief state, the resulting state space is infinite!
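A minimal sketch (assumed, not course material) of what belief-state progression looks like in Python: a belief state is a set of states, actions map it forward, and sensing partitions it. Note how an action alone can shrink the belief state:

```python
def apply_action(belief, transition):
    """Progress a belief state through a (possibly non-deterministic)
    action given as state -> iterable of successor states."""
    return frozenset(s2 for s in belief for s2 in transition(s))

def sense(belief, observation_of):
    """Partition a belief state by the observation each state would yield."""
    partition = {}
    for s in belief:
        partition.setdefault(observation_of(s), set()).add(s)
    return partition

# Vacuum example: with no sensors, the roomba may be in either room.
belief = frozenset({("L", "dirty"), ("R", "dirty")})

# "Move left" collapses positional uncertainty: both states map to L.
move_left = lambda s: {("L", s[1])}
print(apply_action(belief, move_left))   # frozenset({('L', 'dirty')})
```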

  10. Will we really need to handle multiple-state problems? • Can’t we just buy better cameras, so our agents can always tell what state they are in? • It is not just a question of having a good pair of eyes. Otherwise, why do malls have maps of the mall with a “you are here” annotation? • The problem of localizing yourself in a map is a non-trivial one.

  11. If we can solve problems without sensors, then why have sensing?

  12. Medicate without killing: A healthy (and alive) person accidentally walked into the Springfield nuclear plant and got irradiated, which may or may not have given her a disease D. The medication M will cure her of D if she has it; otherwise, it will kill her. There is a test T which, when done on patients with disease D, turns their tongues red (R). You can observe with Look sensors to see whether the tongue is red or not. We want to cure the patient without killing her.
  • Non-deterministic actions are normal edges in belief space (but hyper-edges in the original state space).
  • Sensing “partitions” the belief state.
  [Figure: the belief-space plan. Radiate takes (A) to the belief state {(D,A), (~D,A)}. Medicating directly on that belief state yields {(~D,A), (~D,~A)}, i.e., she may die. Instead, do the test, then ask “Is tongue red?” to partition the belief state into {(D,A,R)} and {(~D,A,~R)}, and Medicate only on the red branch, yielding (~D,A,R).]
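A hedged Python sketch of this example in belief space; the propositions D (disease), A (alive), R (red tongue) follow the slide, while the encoding and function names are my own illustration:

```python
State = frozenset  # a state is a set of true propositions

def radiate(s):   return [s | {"D"}, s]                  # may or may not cause D
def test(s):      return [s | {"R"} if "D" in s else s]  # red tongue iff D
def medicate(s):  return [s - {"D"}] if "D" in s else [s - {"A"}]  # cures or kills

start = {State({"A"})}                                    # healthy and alive
after_radiate = {s2 for s in start for s2 in radiate(s)}  # {(A), (D,A)}
after_test    = {s2 for s in after_radiate for s2 in test(s)}

# Sensing "Is tongue red?" partitions the belief state:
red     = {s for s in after_test if "R" in s}             # has D -> medicate
not_red = {s for s in after_test if "R" not in s}         # no D  -> do nothing
cured   = {s2 for s in red for s2 in medicate(s)} | not_red
assert all("A" in s and "D" not in s for s in cured)      # alive, disease-free
```

Medicating only on the sensed branch is exactly why the plan never kills her; applying medicate to the whole post-radiate belief state would include the lethal outcome.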

  13. Unknown State Space • When you buy a Roomba, does it come with the “layout” of your home? • Fat chance! For $200, they aren’t going to customize it to everyone’s place! • When the map is not given, the robot needs to both “learn” the map and “achieve the goal” • Integrates search/planning and learning • Exploration/Exploitation tradeoff • Should you bother learning more of the map when you have already found a way of satisfying the goal? • (At the end of elementary school, should you go ahead and “exploit” the 5 years of knowledge you gained by taking up a job, or explore a bit more by doing high school, college, grad school, post-doc…?) • Most relevant sub-area: Reinforcement learning
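As a toy illustration of the exploration/exploitation tradeoff (assumed, not course material), here is the epsilon-greedy rule, the simplest device reinforcement learning uses for it; the action names are hypothetical:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon, explore a random action;
    otherwise exploit the best action found so far."""
    if random.random() < epsilon:
        return random.choice(list(q_values))      # explore
    return max(q_values, key=q_values.get)        # exploit

estimates = {"take_job": 5.0, "high_school": 4.2, "college": 3.8}
print(epsilon_greedy(estimates, epsilon=0.2))
```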

  14. 1/17: Project 0 due on Thursday… Makeup class on Friday (TIME?)… Tuesday’s class time will be an optional recitation for the project.

  15. The utility of eyes (sensors) is reflected in the size of the effective search space! In general, a solution is a subgraph rather than a tree (loops may be needed; consider closing a faulty door). Given a state space of size n (or 2^v, where v is the number of state variables):
  • the single-state problem searches for a path in a graph of size n (2^v);
  • the multiple-state problem searches for a path in a graph of size 2^n (2^(2^v));
  • the contingency problem searches for a sub-graph in a graph of size 2^n (2^(2^v)).
  2^n is the EVIL that every CS student’s nightmares should be made of!
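The arithmetic for the 3-variable vacuum world makes the blow-up vivid (a quick check, not from the slides):

```python
# v state variables -> n = 2^v states -> 2^n belief states (2^(2^v)).
v = 3
n = 2 ** v                 # 8 states: the single-state search graph
belief_space = 2 ** n      # 256 belief states, including the empty one
print(n, belief_space)     # 8 256
```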

  16. The important difference from the graph-search scenario you learned in CSE 310 is that you want to keep the graph implicit rather than explicit (i.e., generate only the part of the graph that is absolutely needed to get the optimal path). This is VERY important, since for most problems the graphs are ginormous, tending to infinite.
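A minimal sketch of keeping the graph implicit (names are illustrative): nodes are produced by a successor function only when expanded, so only the nodes generated so far ever exist in memory, even on an infinite graph:

```python
from collections import deque

def bfs_implicit(start, successors, is_goal):
    """Breadth-first search over an implicitly defined graph."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if is_goal(node):
            return node
        for child in successors(node):      # graph materialized lazily here
            if child not in seen:
                seen.add(child)
                queue.append(child)

# Infinite graph over the integers, with edges n -> n+1 and n -> 2n:
print(bfs_implicit(1, lambda n: (n + 1, 2 * n), lambda n: n == 37))
```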

  17. Example: Robotic Path-Planning • States: free-space regions • Operators: movement to neighboring regions • Goal test: reaching the goal region • Path cost: number of movements (distance traveled) [Figure: workspace with initial region I and goal region G]

  18. General Search [algorithm skeleton; the “??” marks the queuing function that each specific strategy fills in]

  19. • Search algorithms differ based on the specific queuing function they use. • All search algorithms must do the goal test only when the node is picked up for expansion. • We typically analyze properties of search algorithms on uniform trees, with uniform branching factor b and goal depth d (the tree itself may go to depth d_t).
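A hedged sketch of the general-search skeleton in Python (my own rendering, not the slide's code): the queuing function is the only thing that changes between algorithms, and the goal test happens only when a node is picked for expansion:

```python
def general_search(start, successors, is_goal, enqueue):
    fringe = [start]
    while fringe:
        node = fringe.pop(0)
        if is_goal(node):             # goal test at expansion time
            return node
        fringe = enqueue(fringe, list(successors(node)))

bfs_enqueue = lambda fringe, children: fringe + children   # FIFO -> BFS
dfs_enqueue = lambda fringe, children: children + fringe   # LIFO -> DFS

tree = {"A": ["B", "C"], "B": ["G"], "C": [], "G": []}
print(general_search("A", lambda n: tree[n], lambda n: n == "G", bfs_enqueue))
```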

  20. Evaluating: for the tree below, b=3 and d=4. [Figure: example uniform tree]

  21. Breadth-first search on a uniform tree with b=10. Assume 1000 nodes expanded/sec and 100 bytes/node.
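The time/memory figures follow directly from those assumptions; a small Python sketch (the depth range is my choice) regenerates them:

```python
# BFS on a uniform b=10 tree must generate 1 + b + b^2 + ... + b^d nodes
# to reach depth d; at 1000 nodes/sec and 100 bytes/node:
for d in range(2, 15, 2):
    nodes = sum(10 ** i for i in range(d + 1))
    days = nodes / 1000 / 86400
    gib = nodes * 100 / 2 ** 30
    print(f"d={d:2d}  nodes={nodes:.1e}  time={days:9.2e} days  "
          f"memory={gib:9.2e} GiB")
```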

  22. Qn: Is there a way of getting linear memory search that is complete and optimal?

  23. The search is “complete” now (since there is a finite space to be explored), but still not optimal.

  24. IDDFS: Review
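A minimal IDDFS sketch in Python (names are illustrative): depth-limited DFS restarted with increasing limits, giving linear memory while keeping BFS-like completeness, and optimality on unit-cost trees. The example tree is the same shape used on the next slides:

```python
def depth_limited(node, successors, is_goal, limit):
    """DFS that refuses to descend below the given depth limit."""
    if is_goal(node):
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        path = depth_limited(child, successors, is_goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(start, successors, is_goal, max_depth=50):
    for limit in range(max_depth + 1):           # restart with deeper limits
        path = depth_limited(start, successors, is_goal, limit)
        if path is not None:
            return path

tree = {"A": ["B", "G"], "B": ["C", "D"], "C": [], "D": [], "G": []}
print(iddfs("A", lambda n: tree[n], lambda n: n == "G"))   # ['A', 'G']
```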

  25. BFS: A, B, G
  DFS: A, B, C, D, G
  IDDFS: (A), (A, B, G)
  Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space. [Figure: search space over nodes A, B, C, D, G]

  26. Search on undirected graphs, or directed graphs with cycles… Cycles galore…
  BFS: A, B, G
  DFS: A, B, A, B, A, B, A, B, A, B, …
  IDDFS: (A), (A, B, G)
  Note that IDDFS can do fewer expansions than DFS on a graph-shaped search space. [Figure: cyclic search space over nodes A, B, C, D, G]

  27. Graph (instead of tree) search: handling repeated nodes. Main points:
  • Repeated expansion is a bigger issue for DFS than for BFS or IDDFS.
  • Trying to remember all previously expanded nodes and comparing the new nodes with them is infeasible: space becomes exponential, and duplicate checking can also be exponential.
  • Partial reduction in repeated expansion can be done by:
    - checking whether any children of a node n have the same state as the parent of n;
    - checking whether any children of a node n have the same state as any ancestor of n (at most d ancestors for n, where d is the depth of n).
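A hedged sketch of that partial reduction (my own rendering): prune a child whose state already appears among its ancestors, costing at most d comparisons for a node at depth d instead of remembering every expanded node:

```python
class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent

def on_ancestor_path(node, state):
    """True if `state` appears anywhere on the path back to the root."""
    while node is not None:
        if node.state == state:
            return True
        node = node.parent
    return False

def expand(node, successors):
    """Generate children, skipping any whose state repeats an ancestor's."""
    return [Node(s, node) for s in successors(node.state)
            if not on_ancestor_path(node, s)]
```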
