
Planning (AIMA Ch. 10)


Presentation Transcript


  1. Planning (AIMA Ch. 10) • Planning problem defined • Simple planning agent • Types of plans • Graph planning

  2. What we have so far • Can TELL the KB about new percepts about the world • KB maintains a model of the current world state • Can ASK the KB about any fact that can be inferred from the KB. How can we use these components to build a planning agent, i.e., an agent that constructs plans that can achieve its goals, and that then executes these plans?

  3. Planning Problem • Find a sequence of actions that achieves a given goal when executed from a given initial world state. That is, given • a set of operator descriptions (defining the possible primitive actions by the agent), • an initial state description, and • a goal state description or predicate, • compute a plan, which is • a sequence of operator instances, such that executing them in the initial state will change the world to a state satisfying the goal-state description. • Goals are usually specified as a conjunction of subgoals to be achieved

  4. Planning vs. Problem Solving • Planning and problem-solving methods can often solve the same sorts of problems • Planning is more powerful because of the representations and methods used • States, goals, and actions are decomposed into sets of sentences (usually in first-order logic) • Search often proceeds through plan space rather than state space (though there are also state-space planners) • Subgoals can be planned independently, reducing the complexity of the planning problem

  5. Remember: Problem-Solving Agent
  function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation
    state ← UPDATE-STATE(state, percept)        // What is the current state?
    if seq is empty then
      goal ← FORMULATE-GOAL(state)              // e.g., from LA to San Diego (given curr. state)
      problem ← FORMULATE-PROBLEM(state, goal)
      seq ← SEARCH(problem)
    action ← RECOMMENDATION(seq, state)         // e.g., gas usage
    seq ← REMAINDER(seq, state)                 // if it fails to reach the goal, update
    return action
  Note: This is offline problem-solving. Online problem-solving involves acting without complete knowledge of the problem and environment.

  6. Simple planning agent • Use percepts to build a model of the current world state • IDEAL-PLANNER: given a goal, the algorithm generates a plan of action • STATE-DESCRIPTION: given a percept, return an initial state description in the format required by the planner • MAKE-GOAL-QUERY: used to ask the KB what the next goal should be

  7. A Simple Planning Agent
  function SIMPLE-PLANNING-AGENT(percept) returns an action
    static: KB, a knowledge base (includes action descriptions)
            p, a plan (initially NoPlan)
            t, a time counter (initially 0)
    local variables: G, a goal
                     current, a current state description
    TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
    current ← STATE-DESCRIPTION(KB, t)
    if p = NoPlan then
      G ← ASK(KB, MAKE-GOAL-QUERY(t))
      p ← IDEAL-PLANNER(current, G, KB)
    if p = NoPlan or p is empty then
      action ← NoOp
    else
      action ← FIRST(p)          // like popping from a stack
      p ← REST(p)
    TELL(KB, MAKE-ACTION-SENTENCE(action, t))
    t ← t + 1
    return action
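The control flow of this agent can be sketched in Python. The KB, STATE-DESCRIPTION, MAKE-GOAL-QUERY, and IDEAL-PLANNER pieces are stand-in stubs of our own (the toy instantiation below is not the book's code); only the plan-as-stack loop mirrors the pseudocode above.

```python
def make_planning_agent(state_description, make_goal, planner):
    """Build an agent function that plans when it has no plan, then
    executes the stored plan one action per call (like popping a stack)."""
    plan = []          # p, initially NoPlan (empty list here)
    t = 0              # time counter

    def agent(percept):
        nonlocal plan, t
        current = state_description(percept, t)
        if not plan:                        # p = NoPlan: ask for a goal, replan
            goal = make_goal(t)
            plan = planner(current, goal)   # may return [] if no plan is found
        action = plan.pop(0) if plan else "NoOp"
        t += 1
        return action

    return agent

# Toy instantiation: the "planner" just returns a fixed two-step plan.
agent = make_planning_agent(
    state_description=lambda percept, t: percept,
    make_goal=lambda t: "HaveCake",
    planner=lambda state, goal: ["Bake", "Eat"],
)
print([agent("s0") for _ in range(3)])   # ['Bake', 'Eat', 'Bake']
```

After the two-step plan is exhausted, the third call finds p = NoPlan and replans, so the agent starts the plan over.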

  8. Goal of Planning • Choose actions to achieve a certain goal • But isn't it exactly the same goal as for problem solving? • Some difficulties with problem solving: • The successor function is a black box: it must be "applied" to a state to know which actions are possible in that state and what the effects of each one are

  9. Goal of Planning • Choose actions to achieve a certain goal • But isn't it exactly the same goal as for problem solving? • Some difficulties with problem solving: • Suppose that the goal is HAVE(milk). From some initial state where HAVE(milk) is not satisfied, the successor function must be repeatedly applied to eventually generate a state where HAVE(milk) is satisfied. • An explicit representation of the possible actions and their effects would help the problem solver select the relevant actions. Otherwise, in the real world an agent would be overwhelmed by irrelevant actions

  10. Goal of Planning • Choose actions to achieve a certain goal • But isn't it exactly the same goal as for problem solving? • Some difficulties with problem solving: • The goal test is another black-box function, states are domain-specific data structures, and heuristics must be supplied for each new problem

  11. Goal of Planning • Choose actions to achieve a certain goal • But isn't it exactly the same goal as for problem solving? • Some difficulties with problem solving: • The goal test is another black-box function, states are domain-specific data structures, and heuristics must be supplied for each new problem • Suppose that the goal is HAVE(milk) ∧ HAVE(book). • Without an explicit representation of the goal, the problem solver cannot know that a state where HAVE(milk) is already achieved is more promising than a state where neither HAVE(milk) nor HAVE(book) is achieved.

  12. Goal of Planning • Choose actions to achieve a certain goal • But isn't it exactly the same goal as for problem solving? • Some difficulties with problem solving: • The goal may consist of several nearly independent subgoals, but there is no way for the problem solver to know it

  13. Goal of Planning • Choose actions to achieve a certain goal • But isn't it exactly the same goal as for problem solving? • Some difficulties with problem solving: • The goal may consist of several nearly independent subgoals, but there is no way for the problem solver to know it • HAVE(milk) and HAVE(book) may be achieved by two nearly independent sequences of actions

  14. Representations in Planning • Planning opens up the black boxes by using logic to represent: • Actions • States • Goals • Problem solving + logic representation → planning

  15. How to represent the Planning Problem • Classical planning: we will start with problems in fully observable, deterministic, static environments with a single agent

  16. PDDL PDDL = Planning Domain Definition Language ← the standard encoding language for "classical" planning tasks. Components of a PDDL planning task: • Objects: things in the world that interest us. • Predicates: properties of objects that we are interested in; can be true or false. • Initial state: the state of the world that we start in. • Goal specification: things that we want to be true. • Actions/Operators: ways of changing the state of the world. PDDL can describe what we need for a search algorithm: 1) the initial state, 2) the actions that are available in a state, 3) the result of applying an action, and 4) the goal test

  17. PDDL operators • States: represented as a conjunction of fluents (ground, functionless atoms). The following are not allowed: At(x, y) (non-ground), ¬Poor (negation), and At(Father(Fred), Sydney) (uses a function symbol). • Use the closed-world assumption: any fluents that are not mentioned are false. • Use the unique names assumption: states named differently are distinct. *Fluents: predicates and functions whose values vary over time (situation). *Ground term: a term with no variables. *Literal: an atomic sentence or its negation. The initial state is a conjunction of ground atoms. The goal is also a conjunction (∧) of literals (positive or negative) that may contain variables.

  18. PDDL operators cont. • Actions: described by a set of action schemas that implicitly define the functions ACTIONS(s) and RESULT(s, a). • Action schema: composed of 1) the action name, 2) a list of all the variables used in the schema, 3) a precondition, and 4) an effect • Action(Fly(p, from, to), PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to), EFFECT: ¬At(p, from) ∧ At(p, to)) • With values substituted for the variables: Action(Fly(P1, SFO, JFK), PRECOND: At(P1, SFO) ∧ Plane(P1) ∧ Airport(SFO) ∧ Airport(JFK), EFFECT: ¬At(P1, SFO) ∧ At(P1, JFK)) • Action a is applicable in state s if the preconditions are satisfied by s. *Precondition: a condition which makes an action possible.

  19. PDDL operators cont. • The result of executing action a in state s is defined as a state s′, represented by the set of fluents formed by starting with s, removing the fluents that appear as negative literals in the action's effects, and adding the fluents that appear as positive literals. • For example: in Fly(P1, SFO, JFK) we remove At(P1, SFO) and add At(P1, JFK). a ∈ ACTIONS(s) ⇔ s |= PRECOND(a); RESULT(s, a) = (s − DEL(a)) ∪ ADD(a), where DEL(a) is the list of literals which appear negatively in the effect of a, and ADD(a) is the list of positive literals in the effect of a. • *The precondition always refers to time t and the effect to time t + 1.
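The applicability test and RESULT(s, a) = (s − DEL(a)) ∪ ADD(a) above can be sketched directly. The Action tuple encoding and the string fluents below are our own illustration, not PDDL syntax; the ground literals come from the Fly(P1, SFO, JFK) example.

```python
from collections import namedtuple

# An action is its name plus three sets of ground fluents (strings).
Action = namedtuple("Action", ["name", "precond", "add", "delete"])

def applicable(state, action):
    # a ∈ ACTIONS(s) iff s |= PRECOND(a): every precondition fluent holds in s
    return action.precond <= state

def result(state, action):
    # RESULT(s, a) = (s − DEL(a)) ∪ ADD(a)
    assert applicable(state, action)
    return (state - action.delete) | action.add

# Ground instance Fly(P1, SFO, JFK)
fly = Action(
    name="Fly(P1, SFO, JFK)",
    precond={"At(P1, SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"},
    add={"At(P1, JFK)"},
    delete={"At(P1, SFO)"},
)

s0 = frozenset({"At(P1, SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"})
s1 = result(s0, fly)
print("At(P1, JFK)" in s1, "At(P1, SFO)" in s1)   # True False
```

Using frozensets for states gives the closed-world reading for free: a fluent holds iff it is a member of the set.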

  20. PDDL: Cargo transportation planning problem

  21. Major approaches • Planning graphs • State-space planning • Situation calculus • Partial-order planning • Hierarchical decomposition (HTN planning) • Reactive planning

  22. Basic representations for planning • Classic approach first used in the STRIPS planner circa 1970 • States represented as a conjunction of ground literals • at(Home) • Goals are conjunctions of literals, but may have existentially quantified variables • at(?x) ∧ have(Milk) ∧ have(Bananas) ... • Do not need to fully specify state • Non-specified facts are either don't-care or assumed false • Represent many cases in small storage • Often only represent changes in state rather than the entire situation • Unlike a theorem prover, we are not asking whether the goal is true, but whether there is a sequence of actions to attain it

  23. Planning Problems
  Start state (sparse encoding, but a complete state specification): On(C, A), On(A, Table), On(B, Table), Clear(C), Clear(B)
  Goal (a set of goal states; only requirements are specified, like unary constraints): On(B, C), On(A, B). Which goal first?
  Action schema (instantiates to give specific ground actions):
  ACTION: MoveToTable(b, x)
  PRECONDITIONS: On(b, x), Clear(b), Block(b), Block(x), (b ≠ x)
  POSTCONDITIONS: On(b, Table), Clear(x), ¬On(b, x)

  24. Operator/action representation • Operators contain three components: • Action description • Precondition: conjunction of positive literals • Effect: conjunction of positive or negative literals which describe how the situation changes when the operator is applied • Example: Op[ACTION: Go(there), PRECOND: At(here) ∧ Path(here, there), EFFECT: At(there) ∧ ¬At(here)] • All variables are universally quantified • Situation variables are implicit • Preconditions must be true in the state immediately before the operator is applied; effects are true immediately after

  25. Types of Plans
  Start state: On(C, A), On(A, Table), On(B, Table), Clear(C), Clear(B), Block(A), ...
  Sequential plan: MoveToTable(C, A) > Move(B, Table, C) > Move(A, Table, B)
  Partial-order plan: MoveToTable(C, A) and Move(B, Table, C), both ordered before Move(A, Table, B)

  26. Forward Search • Forward (progression) state-space search: ♦ Search through the space of states, starting in the initial state and using the problem's actions to search forward for a member of the set of goal states. ♦ Prone to exploring irrelevant actions. ♦ Planning problems often have large state spaces. ♦ The state space is concrete, i.e., there are no unassigned variables in the states.

  27. Forward Search
  From the start state On(C, A), On(A, Table), On(B, Table), Clear(C), Clear(B), Block(A), ..., the applicable actions include MoveToBlock(C, A, B), which produces the successor state with C on B.
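Forward (progression) search as described can be sketched as a plain BFS over states. The domain below is the two-action cake example used later in these slides; the extra negative-precondition field is our own device for expressing Bake's ¬HaveCake precondition, since pure STRIPS preconditions are positive literals only.

```python
from collections import deque

ACTIONS = {
    # name: (positive preconds, negative preconds, add list, delete list)
    "Eat":  ({"HaveCake"}, set(), {"AteCake"}, {"HaveCake"}),
    "Bake": (set(), {"HaveCake"}, {"HaveCake"}, set()),
}

def forward_search(init, goal):
    """BFS through the space of states; returns a list of action names."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                               # goal test
            return plan
        for name, (pos, neg, add, dele) in ACTIONS.items():
            if pos <= state and not (neg & state):      # applicable in state?
                nxt = frozenset((state - dele) | add)   # RESULT(s, a)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                                         # no plan exists

print(forward_search({"HaveCake"}, {"AteCake", "HaveCake"}))  # ['Eat', 'Bake']
```

Even on this two-fluent domain the search enumerates states breadth-first; the "prone to irrelevant actions" point above is exactly why larger domains need the heuristics discussed later.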

  28. Backward Search • Backward (regression) relevant-states search: ♦ Search through sets of relevant states, starting at the set of states representing the goal and using the inverse of the actions to search backward for the initial state. ♦ Only considers actions that are relevant to the goal (or current state). ♦ Works only when we know how to regress from a state description to the predecessor state description. ♦ Needs to deal with partially instantiated actions and states, not just ground ones. (Some variables may not be assigned values.)

  29. Backward Search
  ACTION: MoveToBlock(b, x, y)
  PRECONDITIONS: On(b, x), Clear(b), Clear(y), Block(b), Block(y), (b ≠ x), (b ≠ y), (x ≠ y)
  POSTCONDITIONS: On(b, y), Clear(x), ¬On(b, x), ¬Clear(y)
  Goal state: On(B, C), On(A, B). Relevant actions that achieve On(A, B): MoveToBlock(A, Table, B), or the partially instantiated MoveToBlock(A, x′, B).
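One regression step can be sketched as follows: an action a is relevant to goal g if it achieves some literal of g and deletes none, and the regressed goal is g′ = (g − ADD(a)) ∪ PRECOND(a). The ground MoveToBlock(A, Table, B) literals below are abbreviated for illustration (the static Block(·) facts and inequality constraints are omitted).

```python
def relevant(goal, add, delete):
    """a is relevant to g if it adds some goal literal and deletes none."""
    return bool(add & goal) and not (delete & goal)

def regress(goal, precond, add):
    """Regressed goal: g' = (g - ADD(a)) ∪ PRECOND(a)."""
    return (goal - add) | precond

goal = {"On(A, B)", "On(B, C)"}

# Ground action MoveToBlock(A, Table, B), per the schema above (abbreviated)
pre  = {"On(A, Table)", "Clear(A)", "Clear(B)"}
add  = {"On(A, B)", "Clear(Table)"}
dele = {"On(A, Table)", "Clear(B)"}

if relevant(goal, add, dele):
    print(sorted(regress(goal, pre, add)))
    # ['Clear(A)', 'Clear(B)', 'On(A, Table)', 'On(B, C)']
```

Note how the untouched subgoal On(B, C) is carried through to the predecessor goal: backward search only ever manipulates literals relevant to the goal.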

  30. Planning "Tree"
  Start: HaveCake (Have=T, Ate=F). Goal: AteCake ∧ HaveCake.
  Action Eat: Pre: HaveCake; Add: AteCake; Del: HaveCake.
  Action Bake: Pre: ¬HaveCake; Add: HaveCake.
  [Tree diagram: from (Have=T, Ate=F), applying {Eat} gives (Have=F, Ate=T) and applying {} leaves (Have=T, Ate=F); from (Have=F, Ate=T), applying {Bake} gives (Have=T, Ate=T), {Eat} is inapplicable, and {} leaves (Have=F, Ate=T).]

  31. Reachable State Sets
  [Diagram: instead of a tree, track the set of states reachable after each action layer. Starting from {(Have=T, Ate=F)}, the layer {Eat}/{} yields {(T, F), (F, T)}; the next layer {Bake}/{Eat}/{} yields {(T, F), (F, T), (T, T)}.]

  32. Approximate Reachable Sets
  [Diagram: approximate each reachable set per variable. Initially Have={T}, Ate={F}. After one layer Have={T,F}, Ate={T,F}, with the side constraints (Have, Ate) ≠ (T, T) and (Have, Ate) ≠ (F, F); after two layers only (Have, Ate) ≠ (F, F) remains.]

  33. Planning Graphs
  Start: HaveCake. Goal: AteCake ∧ HaveCake.
  Action Eat: Pre: HaveCake; Add: AteCake; Del: HaveCake. Action Bake: Pre: ¬HaveCake; Add: HaveCake.
  [Graph: level S0 contains HaveCake and ¬AteCake; action level A0 contains Eat plus persistence (no-op) actions; level S1 contains HaveCake, ¬HaveCake, AteCake, and ¬AteCake.]

  34. Mutual Exclusion (Mutex)
  NEGATION: literals and their negations can't be true at the same time. [In S1, HaveCake is mutex with ¬HaveCake, and AteCake with ¬AteCake.]

  35. Mutual Exclusion (Mutex)
  INCONSISTENT EFFECTS: an effect of one action negates an effect of the other. [In A0, Eat (which deletes HaveCake) is mutex with the HaveCake persistence action.]

  36. Mutual Exclusion (Mutex)
  INCONSISTENT SUPPORT: two literals are mutex if all pairs of actions that achieve them are mutex. [In S1, HaveCake and AteCake are mutex.]

  37. Planning Graph
  [Graph extended one more step: S0 → A0 (Eat, persistence actions) → S1 → A1 (Eat, Bake, persistence actions) → S2, with all four literals and their negations present from S1 onward.]

  38. Mutual Exclusion (Mutex)
  COMPETITION: the actions' preconditions are mutex; they cannot both hold.
  INCONSISTENT EFFECTS: an effect of one negates the effect of the other.

  39. Mutual Exclusion (Mutex)
  INTERFERENCE: one action deletes a precondition of the other.
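The action-mutex conditions just listed (inconsistent effects, interference, competition) can be sketched over sets of literal strings. The "¬" string-prefix encoding and the persistence (no-op) action are our own conventions; competition needs the proposition-mutex relation of the previous level, passed in as a set of pairs.

```python
def neg(lit):
    """Negate a literal encoded as a string, e.g. 'HaveCake' <-> '¬HaveCake'."""
    return lit[1:] if lit.startswith("¬") else "¬" + lit

def inconsistent_effects(a, b):
    # an effect of one negates an effect of the other
    return any(neg(e) in b["eff"] for e in a["eff"])

def interference(a, b):
    # one deletes (negates) a precondition of the other
    return (any(neg(e) in b["pre"] for e in a["eff"]) or
            any(neg(e) in a["pre"] for e in b["eff"]))

def competition(a, b, prop_mutex):
    # the actions' preconditions are mutex at the previous level
    return any((p, q) in prop_mutex or (q, p) in prop_mutex
               for p in a["pre"] for q in b["pre"])

eat     = {"pre": {"HaveCake"}, "eff": {"AteCake", "¬HaveCake"}}
persist = {"pre": {"HaveCake"}, "eff": {"HaveCake"}}   # no-op frame action

print(inconsistent_effects(eat, persist))  # True: ¬HaveCake vs HaveCake
print(interference(eat, persist))          # True: Eat deletes persist's precond
```

Proposition mutexes at the next level then follow from negation and inconsistent support, computed from these action mutexes.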

  40. Planning Graph

  41. Observation 1 Propositions monotonically increase (once a literal appears at some level, it appears at all later levels)

  42. Observation 2 Actions monotonically increase (if they applied before, they still do)

  43. Observation 3 Proposition mutex relationships monotonically decrease

  44. Observation 4 Action mutex relationships monotonically decrease

  45. Observation 5 • Claim: the planning graph "levels off" • After some time k, all levels are identical • Because it is a finite space, the set of literals cannot increase indefinitely, nor can the mutexes decrease indefinitely • Claim: if a goal literal never appears, or the goal literals never become non-mutex, no plan exists • If a plan existed, it would eventually achieve all goal literals (and remove goal mutexes, which is less obvious) • The converse is not true: the goal literals all appearing non-mutex does not imply that a plan exists

  46. Heuristics: Ignore Preconditions • Relax the problem by ignoring preconditions • Can drop all or just some preconditions • Can solve in closed form or with set-cover methods

  47. Heuristics: No-Delete • Relax the problem by not deleting falsified literals • Can't undo progress, so solve with hill-climbing (non-admissible)
  [Start state: On(C, A), On(A, Table), On(B, Table), Clear(C), Clear(B)]
  ACTION: MoveToBlock(b, x, y)
  PRECONDITIONS: On(b, x), Clear(b), Clear(y), Block(b), Block(y), (b ≠ x), (b ≠ y), (x ≠ y)
  POSTCONDITIONS: On(b, y), Clear(x) (the delete effects ¬On(b, x), ¬Clear(y) are dropped in the relaxation)

  48. Heuristics: Independent Goals • Independent subgoals? • Partition the goal literals • Find plans for each subset • cost(all) < cost(any)? • cost(all) < sum-cost(each)? • Goal state: On(B, C), On(A, B)

  49. Heuristics: Level Costs • Planning graphs enable powerful heuristics • The level cost of a literal is the smallest level Si in which it appears • Max-level: the goal cannot be realized before the largest goal-conjunct level cost (admissible) • Sum-level: if subgoals are independent, the goal cannot be realized faster than the sum of the goal-conjunct level costs (not admissible) • Set-level: the goal cannot be realized before all conjuncts are non-mutex (admissible)
  [Diagram: the cake planning graph S0 → A0 → S1 → A1 → S2 with Eat and Bake.]
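Level costs can be sketched on a delete-relaxed planning graph: with delete lists and mutex bookkeeping dropped, each literal's level cost is the first layer in which it appears, and max-level is the maximum over goal conjuncts. The cake-domain encoding below and the unconditional Bake (its ¬HaveCake precondition cannot be expressed with positive preconditions) are simplifying assumptions of this sketch.

```python
ACTIONS = [
    ({"HaveCake"}, {"AteCake"}),   # Eat: delete list ignored in the relaxation
    (set(),        {"HaveCake"}),  # Bake: precondition simplified away here
]

def level_costs(init, goals):
    """Level cost of each literal = first relaxed-graph level where it appears."""
    cost = {lit: 0 for lit in init}
    layer = set(init)
    level = 0
    while not (goals <= layer):
        level += 1
        new = set(layer)
        for pre, add in ACTIONS:
            if pre <= layer:                 # action applicable at this level
                new |= add
        if new == layer:                     # graph leveled off: unreachable
            return None
        for lit in new - layer:
            cost[lit] = level
        layer = new
    return cost

costs = level_costs({"HaveCake"}, {"HaveCake", "AteCake"})
print(costs)                                            # {'HaveCake': 0, 'AteCake': 1}
print(max(costs[g] for g in {"HaveCake", "AteCake"}))   # max-level heuristic = 1
```

Summing instead of maximizing over the goal conjuncts gives the (inadmissible) sum-level variant; the set-level heuristic would additionally require tracking the mutex relation per level.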

  50. Graphplan • Graphplan directly extracts plans from a planning graph • Graphplan searches for layered plans (often called parallel plans) • More general than totally-ordered plans, less general than partially-ordered plans • A layered plan is a sequence of sets of actions • Actions in the same set must be compatible • All sequential orderings of compatible actions give the same result
  Layered plan (a two-layer plan): {move(A, B, TABLE); move(C, D, TABLE)} then {move(B, TABLE, A); move(D, TABLE, C)}
