
Continuous Time and Resource Uncertainty




  1. Continuous Time and Resource Uncertainty CSE 574 Lecture Spring ’03 Stefan B. Sigurdsson

  2. (Big Mars Rover Picture)

  3. Lecture Overview Context • Classical planning • The Mars Rover domain • Relaxing the assumptions • Q: What’s so different? Innovation Discussion

  4. (Shakey Picture) Slide shamelessly lifted from http://www.cs.nott.ac.uk/~bsl/G53DIA/Slides/Deliberative-architectures-I.pdf

  5. STRIPS-Like Planning
Actions: • Conjunctive preconditions • STRIPS operators with conjunctive add/delete effects • Instantaneous • Sequential • Deterministic
World description: • Propositional logic • Closed-world assumption • Finite and static • Complete knowledge • Discrete time • No exogenous effects
Goal description: • Attainment ("win or lose") • Conjunctions of positive literals → Plan…
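The STRIPS-style action model above (conjunctive preconditions, add/delete effects, closed world) can be sketched in a few lines. Everything here, including the `move` operator and its literals, is illustrative rather than taken from the lecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    """A STRIPS operator: conjunctive precondition, add/delete effects."""
    name: str
    preconditions: frozenset  # conjunction of positive literals
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state: frozenset) -> bool:
        # Closed-world assumption: a literal holds iff it is in the state.
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        # Deterministic, instantaneous effect: delete, then add.
        return (state - self.del_effects) | self.add_effects

# Hypothetical operator for illustration.
move = Operator("move(a,b)",
                preconditions=frozenset({"at(a)"}),
                add_effects=frozenset({"at(b)"}),
                del_effects=frozenset({"at(a)"}))

state = frozenset({"at(a)", "charged"})
if move.applicable(state):
    state = move.apply(state)
# state now contains at(b) and charged, but not at(a)
```

This is exactly the representation whose assumptions (discrete time, determinism, no metric quantities) the Mars Rover domain forces us to relax.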

  6. (Big Mars Rover Picture)

  7. The Mars Rover Domain Robot control, with… • Positioning and navigation • Complex choices (goals and actions) • Rich utility model • Continuous time and concurrency • Uncertain resource consumption • Metric quantities • Very high stakes! But alone in a finite, static universe

  8. Resources? Metric Quantities? What are those? Various flavors: • Exclusive (camera arm) • Shared (OS scheduling) • Metric quantity (fuel, power, disk space) Uncertainty

  9. Alright, Whatsit Really Mean?

  10. Is This Really A Planning Problem? Better suited to OR/DT-type scheduling? • Time, resources, metric quantities, concurrency, complicated goals/rewards… Complex, inter-dependent activities • Select, calibrate, use, reuse, recalibrate sensors • OR-type scheduling can’t handle rich choices Insight: Maybe we can borrow some tricks?

  11.–14. Can Planners Scale Up? (built up over four slides) Large plans • Sequences of ~100 actions Where do we start? • POP? (Branch factors are too big) • MDP? (Complete policy is too large) • Graph/SATplan? (Discrete representations)

  15. Which Extensions First? Metric quantities • Time • Resources Resource Uncertainty Concurrency  What about non-determinism?  Reasonable for Graphplan?

  16. A (Very Incomplete) Research Timeline
1971 STRIPS (Fikes/Nilsson)
1989 ADL (Pednault)
1991 PEDESTAL (McDermott)
1992 UCPOP (Penberthy/Weld) • SENSp (Etzioni et al.) • CNLP (Peot/Smith)
1993 Buridan (Kushmerick et al.)
1994 C-Buridan (Draper et al.) • JIC Scheduling (Drummond et al.) • HSTS (Muscettola) • Zeno (Penberthy/Weld) • Softbots (Weld/Etzioni) • MDP (Williamson/Hanks)
1995 DRIPS (Haddawy et al.) • IxTeT (Laborie/Ghallab)
1997 IPP (Koehler et al.)
1998 PGraphplan (Blum/Langford) • Weaver (Blythe) • PUCCINI (Golden) • CGP (Smith/Weld) • SGP (Weld et al.)
1999 Mahinur (Onder/Pollack) • ILP-PLAN (Kautz/Walser) • TGP (Smith/Weld) • LPSAT (Wolfman/Weld)
2000 T-MDP (Boyan/Littman) • HSTS/RA (Jónsson et al.)
Since then?
(Slide annotations: not implemented; ADL implementation; sensing; conformant; contingent; planning + scheduling; metric time/resources; safe planning; decision-theoretic goals; uncertain utility; shared resources; uncertain/dynamic; resources)

  17. Mars Rover Domain Assumptions: a feature matrix comparing planners (STRIPS, UCPOP, CGP, CNLP, SENSp, Buridan, Weaver, C-Buridan, MDP, PO-MDP, S-MDP, T-MDP, F-MDP, LPSAT) against features of the domain: classical planning, expressive logic, non-determinism, observation, goal model, plan utility, durative actions, complex concurrence, continuous time, metric quantities, branching factor, resource uncertainty, resource constraints, goal selection, safe planning, exogenous events, select contingencies, serialized goals? The bleeding edge.

  18. Brain-teaser: Domain Spec State space S • Cartesian product of continuous and discrete axes (time, position, achievements, energy…) Initial state s_i • Probability distribution Domain theory • Concurrent, non-deterministic, uncertain   What else? (S, s_i, , …)

  19. Brain-teaser: Kalman Filters Curiously missing from the paper we read (?) • 1983 Kalman filters paper: Voyager enters Jupiter orbit through a 30-second window after 11 years in space • Hugh Durrant-Whyte's robots • Why not for the Mars Rover?

  20. Context Summary Complex, exciting domain Pushes the planning envelope • Expression • Scaling  Where do we start?

  21. Lecture Overview Context Innovation • Just-in-case planning • Incremental contingency planning Discussion

  22. Just-In-Case Planning Motivated by domain characteristics • Metric quantities • Large branch factors Implications • Neither a plan nor a policy, but an expanded plan  What about concurrency?

  23. Branch Heuristics Most probable failure point (scheduling) Highest utility branch point (planning)  What is the intrinsic difference?

  24. When To Execute A Contingency?

  25. Incremental Contingency Planning Algorithm Input: Domain description and master plan Output: Highest-utility branch point Algorithm: • Compute value, estimate resources during master plan • Approximate branch point utilities • Select highest-utility branch point • Solve w/ new initial, goal conditions • Repeat while necessary
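The loop on this slide can be sketched as follows. Here `evaluate`, `branch_gains`, and `solve` are hypothetical stand-in callables (not any real planner's API), wired to toy data so the control flow runs end to end.

```python
def incremental_contingency_planning(plan, evaluate, branch_gains, solve, rounds=5):
    """Sketch of the incremental-contingency-planning loop on the slide.

    plan: a sequence of steps in the master plan.
    evaluate(plan): value/resource estimates per step (step 1 of the slide).
    branch_gains(plan, profiles, branches): approximate utility gain of
        branching at each not-yet-branched point (step 2).
    solve(state): plan a contingency from the predicted state at the
        branch point toward the original goal (step 4).
    """
    branches = {}
    for _ in range(rounds):                       # "repeat while necessary"
        profiles = evaluate(plan)
        gains = branch_gains(plan, profiles, branches)
        if not gains:
            break
        point = max(gains, key=gains.get)         # highest-utility branch point
        if gains[point] <= 0:                     # no worthwhile branch left
            break
        branches[point] = solve(profiles[point])
    return branches

# Toy demonstration with invented gains: only points 1 and 2 are worth it.
plan = ["drive", "dig", "image"]
evaluate = lambda p: {i: f"state@{i}" for i in range(len(p))}
branch_gains = lambda p, prof, br: {i: g for i, g in
                                    {0: -1.0, 1: 3.0, 2: 0.5}.items()
                                    if i not in br}
solve = lambda state: f"contingency-from-{state}"

result = incremental_contingency_planning(plan, evaluate, branch_gains, solve)
# result maps branch points 1 and 2 to their contingency plans
```

Note how contingencies are added greedily in decreasing order of estimated gain, which is the sense in which the algorithm is incremental.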

  26. Branch Utility Approximation … without constructing plan • Construct a plan graph • Back-propagate utility functions through plan graph, instead of regression searching • Compute branch point utilities throughout input plan
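One way to picture back-propagating a utility function through a single action with uncertain resource consumption is as an expectation over consumption outcomes. The numbers and the goal utility below are invented for illustration, not taken from the paper.

```python
def back_propagate(utility_after, consumption_dist):
    """Return U_before(r) = sum over c of P(c) * U_after(r - c).

    utility_after: utility as a function of remaining resource.
    consumption_dist: dict mapping consumption amount -> probability.
    Running out of resource (r - c < 0) yields zero utility.
    """
    def utility_before(r):
        return sum(p * (utility_after(r - c) if r - c >= 0 else 0.0)
                   for c, p in consumption_dist.items())
    return utility_before

# Invented example: the goal is worth 10 if any resource remains.
u_goal = lambda r: 10.0 if r >= 0 else 0.0
# The action consumes 3 or 5 units with equal probability.
u_before = back_propagate(u_goal, {3: 0.5, 5: 0.5})

u_before(6)  # both outcomes leave resource -> 10.0
u_before(4)  # only the c=3 outcome succeeds -> 5.0
u_before(2)  # neither outcome succeeds -> 0.0
```

Chaining this backward through every node of the plan graph, instead of regression searching, is what yields the branch point utilities along the input plan.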

  27. Back-Propagating Distributions Mausam: “Some parts of the paper are tersely written, which make it a little harder to understand. I got quite confused in the discussion of utility propagation. It would have been nicer had they given some theorems about the soundness of their method.” Well, me too

  28.–47. Back-Propagating Distributions (a worked example built up across twenty slides): a plan graph with actions A, B, C, D, E; resource-consumption distributions (10, 15), (10, 15), (3, 3), (2, 2), and (1, 5); propositions p, q, r, s, t; and goals g and g′. Piecewise utility functions are back-propagated node by node through the graph. The build ends with a candidate ordering and utility function at each branch point: (DCE, AB, DABE) on one branch and (CDE, ABDE) on the other.

  48.–50. Utility Estimation (built up across three slides): at a branch point, the MAX operator takes the pointwise maximum of the candidate branches' utility functions, combining (DCE, AB, DABE) and (CDE, ABDE) into (DCE, ABDE). The resulting estimates are then combined with Monte Carlo simulation results.
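The MAX operator on these slides can be illustrated with two toy branch utility functions; the thresholds and payoffs below are invented, not the paper's values.

```python
def max_utility(*branch_utilities):
    """MAX operator: at a branch point the achievable utility is the
    pointwise maximum over the available branches' utility functions."""
    def combined(r):
        return max(u(r) for u in branch_utilities)
    return combined

# Two invented branches: a high-payoff option needing lots of resource,
# and a cheaper fallback.
u_main     = lambda r: 8.0 if r >= 25 else 0.0
u_fallback = lambda r: 6.0 if r >= 6 else 0.0

u = max_utility(u_main, u_fallback)
u(30)  # main branch dominates -> 8.0
u(10)  # only the cheap branch is feasible -> 6.0
u(3)   # neither is feasible -> 0.0
```

With step-function utilities, the combined function is again a step function, which is why these estimates stay cheap to propagate before being refined by Monte Carlo simulation.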
