
An Introduction to Artificial Intelligence CE 40417


Presentation Transcript


  1. An Introduction to Artificial Intelligence CE 40417 Chapter 11 – Planning Ramin Halavati (halavati@ce.sharif.edu) In which we see how an agent can take advantage of the structure of a problem to construct complex plans of action.

  2. What is planning? • We have some Operators. • We have a Current State. • We have a Goal State. • We want to know: how to arrange the operators to reach the Goal State from the Current State. (A minimal representation sketch follows below.)
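
The following is a minimal, illustrative sketch (not part of the original slides) of these three ingredients in a set-based representation; the literal strings and the Operator type are assumptions made for the example.

```python
# A minimal sketch of a planning problem: operators, a current state, a goal.
# Names such as "At(P1,SFO)" are illustrative string literals, not a real API.
from typing import FrozenSet, NamedTuple

class Operator(NamedTuple):
    name: str
    preconditions: FrozenSet[str]   # literals that must hold before applying
    add_effects: FrozenSet[str]     # literals the operator makes true
    del_effects: FrozenSet[str]     # literals the operator makes false

def applicable(op: Operator, state: FrozenSet[str]) -> bool:
    # An operator is applicable when all its preconditions hold in the state.
    return op.preconditions <= state

def apply_op(op: Operator, state: FrozenSet[str]) -> FrozenSet[str]:
    # Successor state: remove the deleted literals, then add the added ones.
    return (state - op.del_effects) | op.add_effects

current_state: FrozenSet[str] = frozenset({"At(P1,SFO)", "At(C1,SFO)"})
goal_state: FrozenSet[str] = frozenset({"In(C1,P1)"})
```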

  3. Air Cargo Transfer Example • What’s in the domain: • We have a set of airports such as SFO, JFK, … • We have a set of cargos such as C1, C2, … • We have some airplanes such as P1, P2, … • State: • Planes and cargos are at specific airports and we want to change their positions. • Actions: • Load (Cargo, Plane, Airport) • Fly (Plane, Airport1, Airport2) • Unload (Cargo, Plane, Airport)

  4. Blocks World Example • Domain Objects: • A set of blocks and a table. • States: • Blocks are stacked on each other and on the table, and we want to change their positions. • Actions: • PickUp( Block ) • PutDown( Block ) • Unstack( Block, Block ) • Stack( Block, Block )

  5. Domain Definition Example 1 AIR CARGO TRANSPORT DOMAIN: • Objects: • SFO, JFK, C1, C2, P1, P2. • Predicates: • At( C1, SFO ) • In( C1, P2 ) • Plane( P1 ) • Cargo( C1 ) • Actions: • …

  6. Domain Definition Ex.1 (cont.) • Actions: • Name, • Parameters, • Preconditions, • Effects. • LOAD( c, p, a ) • Prec.: At(c,a), At(p,a), Cargo(c), Plane(p), Airport(a). • Effects: ~At(c,a), In(c,p)

  7. Domain Definition Ex.1 (cont.) • Actions: • UNLOAD( c, p, a ) • Prec.: In(c,p), At(p,a), Cargo(c), Plane(p), Airport(a). • Effects: At(c,a), ~In(c,p) • FLY( p, a1, a2 ) • Prec.: At(p,a1), Plane(p), Airport(a1), Airport(a2). • Effects: ~At(p,a1), At(p,a2) (A ground example of these action schemas is sketched below.)
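
For illustration (not from the slides), one ground instance of the Load schema in the set-based Operator sketch used above; the static type predicates are simply treated as extra preconditions.

```python
# Ground instance Load(C1, P1, SFO), illustrative only.
from typing import FrozenSet, NamedTuple

class Operator(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    del_effects: FrozenSet[str]

load_c1_p1_sfo = Operator(
    name="Load(C1,P1,SFO)",
    preconditions=frozenset({"At(C1,SFO)", "At(P1,SFO)",
                             "Cargo(C1)", "Plane(P1)", "Airport(SFO)"}),
    add_effects=frozenset({"In(C1,P1)"}),
    del_effects=frozenset({"At(C1,SFO)"}),
)
```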

  8. Domain Definition Example 2 BLOCKS WORLD DOMAIN • Objects: • A, B, C, … (the blocks) & the ROBOT. • Predicates: • On( x, y ). • OnTable( x ). • Holding( x ). • HandEmpty. • Clear( x ). • Block( x ).

  9. Domain Definition Ex.2 (cont.) • Actions: • UnStack( x, y ): • Prec: On(x,y), HandEmpty, Clear( x ). • Effects: ~On(x,y), Holding(x), ~Clear(x), Clear(y). • Stack( x, y ): • Prec: Holding(x), Clear(y). • Effects: On(x,y), HandEmpty, ~Holding(x), Clear( x ), ~Clear(y). NOTE: Nothing says what is a block and what is not.

  10. Domain Definition Ex.2(cont.) • Actions: • PickUp( x ): • Prec: HandEmpty, Clear( x ), OnTable( x ). • Effects: Holding(x), ~Clear(x), ~OnTable( x ). • PutDown( x ): • Prec: Holding(x). • Effects: OnTable( x ), HandEmpty, Clear( x ).

  11. Problem Definition Ex.2 • PROBLEM DEFINITION: • Initial State: On(C,A), Clear(C), ~Clear(A), OnTable(A), Clear(B), OnTable(B), HandEmpty. • Goal State: HandEmpty, Clear(A), On(A,B), On(B,C), OnTable(C). (A set-based encoding of this problem is sketched below.)
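
A possible closed-world encoding of this problem (an assumption of the sketch, not the slides' own notation): only positive literals are stored, so ~Clear(A) is represented by the absence of Clear(A).

```python
# Closed-world encoding of the blocks-world problem above.
initial_state = frozenset({"On(C,A)", "Clear(C)", "OnTable(A)",
                           "Clear(B)", "OnTable(B)", "HandEmpty"})
goal = frozenset({"HandEmpty", "Clear(A)", "On(A,B)", "On(B,C)", "OnTable(C)"})

def goal_test(state: frozenset) -> bool:
    # The goal is satisfied when every goal literal holds in the state.
    return goal <= state
```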

  12. Simplest Approach • It’s all about SEARCH. • States: As described before. • Next-State Generator: Find which actions are applicable and apply each of them. • Path Cost: One for each action. • Goal Test: Has the goal state been reached? (A breadth-first sketch follows below.)
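
A breadth-first progression planner over the Operator sketch introduced earlier might look like this (purely illustrative: no heuristics, unit action costs, ground operators only).

```python
# Breadth-first forward (progression) search over ground Operator tuples.
from collections import deque

def forward_search(initial, goal, operators):
    frontier = deque([(initial, [])])            # (state, plan so far)
    visited = {initial}                          # repeated-state check
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                        # goal test
            return plan
        for op in operators:
            if op.preconditions <= state:        # applicable?
                nxt = (state - op.del_effects) | op.add_effects
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [op.name]))
    return None                                  # no plan exists
```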

  13. [Figure: part of the blocks-world search space, expanding block configurations from the initial state toward the goal.]

  14. Simplest Approach • Progression (Forward Search): • Start from the Initial State and move forward until you reach the Goal State. NOTE: Backtracking is mandatory.

  15. Simplest Approach • Regression (Backward Search): • Put the Goal State’s predicates in the agenda. • Recursively fetch an item from the agenda: • Find an action to satisfy it. • Remove all its effects from the agenda. • Add all its preconditions to the agenda. (A one-step sketch follows below.)
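
One regression step in the same set-based sketch (illustrative only; the choice among candidate achievers, and backtracking over that choice, are left out).

```python
# Regress an agenda of goal literals through one achieving operator:
# drop the literals the operator adds, require its preconditions instead.
def achievers(goal_literal, operators):
    # Candidate actions: those whose add effects include the chosen goal.
    return [op for op in operators if goal_literal in op.add_effects]

def regress(agenda, op):
    # New agenda after committing to 'op' as the last action of the plan.
    return (agenda - op.add_effects) | op.preconditions
```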

  16. Regression Example On(A,B), On(B,C), OnTable(C) 1. Pick Goal: On(A,B) 2. Choose Action: Stack(A,B) 3. Add the action's preconditions to the agenda and remove its effects from it. On(B,C), OnTable(C), Holding(A), Clear(B). 1. Pick Goal: Holding(A) 2. Choose Action: PickUp(A). 3. ... On(B,C), OnTable(C), Clear(B), HandEmpty, Clear(A), OnTable(A) ...

  17. Pure Search Approaches • Heuristics: • Using a Relaxed Domain Definition: • Assume actions have no preconditions. • Assume actions have no negative effects. • … • (These are all admissible.) • Sub-Goal Independence Assumption: • Assume each goal can be achieved by its own sub-plan, regardless of the others' needs. • (Not necessarily admissible; it depends on the domain.) (Two rough heuristic sketches follow below.)
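
Two of these heuristics, sketched very roughly on the set representation (assumptions of this illustration, not the course's reference implementation).

```python
# h_goal_count follows the sub-goal independence idea with unit costs
# (cheap, not admissible in general). h_relaxed_levels ignores negative
# effects and counts how many relaxed "layers" are needed before all goals
# appear, which lower-bounds the real plan length under unit costs.
def h_goal_count(state, goal):
    return len(goal - state)

def h_relaxed_levels(state, goal, operators):
    reached, levels = set(state), 0
    while not goal <= reached:
        grown = set(reached)
        for op in operators:
            if op.preconditions <= reached:
                grown |= op.add_effects
        if grown == reached:
            return float("inf")        # unreachable even in the relaxation
        reached, levels = grown, levels + 1
    return levels
```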

  18. Simplest Approach • What’s wrong with Search? • Branching Factor may be too big. • The search space is reversible, resulting in infinite loops and repeated states. • Simple Search is the least that we can do.

  19. PARTIAL ORDER PLANNING

  20. Partial Order Planning • We do not need to start from the beginning of the plan and march to the end. • Some steps, facts, etc. are more important; we can decide on them ahead of time. • We can impose the fewest possible commitments during the task.

  21. [Figure: a partial-order plan for the blocks-world problem. START (effects: On(C,A), OnTable(A), OnTable(B), Clear(C), Clear(A)) and END (preconditions: On(A,B), On(B,C), OnTable(C)) bracket the actions PickUp(B), Stack(B,C), and Stack(A,B), connected through preconditions and effects such as Holding(B), Clear(C), Holding(A), and Clear(B).] Note: Not all results of each action are mentioned.

  22. Ordering

  23. Partial Order Planning • Assume an action called START: • No preconditions. • All ‘Initial State’ literals as its effects. • Assume an action called END: • All ‘Goal State’ literals as its preconditions. • No effects.

  24. Partial Order Planning • A Partial Plan is a tuple (A, O, L, Agenda) where: • A: the set of actions in the plan • Initially {Start, End} • O: temporal orderings between actions • Initially {Start<End} • Agenda: open preconditions that still need to be satisfied, paired with the action that needs them. • Initially all preconditions of End, e.g. {(BeHome,End), (HaveMoney,End)}. (A plain-Python rendering of this tuple is sketched below.)
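
A plain-Python rendering of this tuple (a sketch; the field names are this example's own choices, not a standard API).

```python
# The <A, O, L, agenda> partial plan as a small data structure.
from dataclasses import dataclass, field

@dataclass
class PartialPlan:
    actions: set = field(default_factory=lambda: {"Start", "End"})
    orderings: set = field(default_factory=lambda: {("Start", "End")})  # (before, after) pairs
    links: set = field(default_factory=set)    # causal links (producer, literal, consumer)
    agenda: set = field(default_factory=set)   # open (precondition, needing action) pairs
```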

  25. Partial Order Planning • L: The set of Causal Links • Initially empty. • Causal Link: action A2 has precondition Q that is established in the plan by action A1, written A1 --Q--> A2. [Figure: the causal link Unstack(C,B) --Clear(B)--> Putdown(A,B).]

  26. Partial Order Planning • Example (after adding Stack(B,C) between START and END): • A = {Start, Stack(B,C), End} • O = {Start<End, Stack(B,C)<End} • L = {(Stack(B,C), On(B,C), End)} • Agenda = {(On(A,B),End), (OnTable(C),End), (Holding(B),Stack(B,C)), (Clear(C),Stack(B,C))}. START: On(C,A), OnTable(A), OnTable(B), Clear(C), Clear(A). END: On(A,B), On(B,C), OnTable(C).

  27. Partial Order Planning • A causal link (A1, Q, A2) represents the assertion that the role of A1 is to establish proposition Q for A2. This tells future search steps to “protect” Q in the interval between A1 and A2. • Action B threatens causal link (A1, Q, A2) if: 1. B has Q as a delete effect, and 2. B could come between A1 and A2, i.e. O ∪ {A1 < B < A2} is consistent. • For example, PutDown(C,B) is a threat to the causal link Unstack(C,B) --Clear(B)--> PutDown(A,B). (A sketch of the threat test follows below.)
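
The threat test can be sketched as follows (illustrative; 'before' is a small helper introduced here to check whether one ordering is already entailed by the constraints, and del_effects is assumed to map action names to their delete lists).

```python
# Does ordering x < y already follow from the constraints? (transitive reach)
def before(x, y, orderings):
    frontier, seen = [x], {x}
    while frontier:
        node = frontier.pop()
        for a, b in orderings:
            if a == node and b not in seen:
                if b == y:
                    return True
                seen.add(b)
                frontier.append(b)
    return False

# B threatens causal link (A1, Q, A2) if B deletes Q and the current
# orderings do not already force B before A1 or after A2.
def threatens(b, link, del_effects, orderings):
    a1, q, a2 = link
    if q not in del_effects.get(b, set()):
        return False
    return not (before(b, a1, orderings) or before(a2, b, orderings))
```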

  28. Finally, POP’s Code. POP(<A,O,L>, agenda)
  1. Termination: If the agenda is empty, return <A,O,L>.
  2. Goal selection: Let <Q, Aneed> be a pair on the agenda.
  3. Action selection: Let Aadd = choose an action that adds Q; if no such action exists, return failure. Let L' = L ∪ {Aadd --Q--> Aneed} and let O' = O ∪ {Aadd < Aneed}. If Aadd is newly instantiated, then A' = A ∪ {Aadd} and O' = O ∪ {Start < Aadd < End} (otherwise, let A' = A).
  4. Updating of goal set: Let agenda' = agenda - {<Q, Aneed>}. If Aadd is newly instantiated, then for each conjunct Qi of its precondition, add <Qi, Aadd> to agenda'.
  5. Causal link protection: For every action At that might threaten a causal link Ap --p--> Ac, add a consistent ordering constraint, either: • Demotion: add At < Ap to O' • Promotion: add Ac < At to O' • Inequality constraints. If neither constraint is consistent, return failure.
  6. Recursive invocation: POP(<A',O',L'>, agenda')

  29. Last POP Notes. • Using Variables: • You need not add UnStack(A,B) when you need Clear(B); just add Unstack(x,B) and add the binding in a later step. • Heuristics: • What to do in ChooseGoal and ChooseAction?

  30. Planning Graph • Main Idea: • To construct a graph of possible outcomes.

  31. Dinner Date Domain • Initial Conditions: (and (garbage) (cleanHands) (quiet)) • Goal: (and (dinner) (present) (not (garbage))) • Actions: • Cook :precondition (cleanHands) :effect (dinner) • Wrap :precondition (quiet) :effect (present) • Carry :precondition :effect (and (not (garbage)) (not (cleanHands))) • Dolly :precondition :effect (and (not (garbage)) (not (quiet))) (An encoding of this domain is sketched below.)
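
For illustration, the same domain in the set-based Operator sketch used earlier; the negative goal (not (garbage)) is handled here by also checking that the literal is absent from the state.

```python
# Dinner-date domain as (preconditions, add effects, delete effects) triples.
from typing import FrozenSet, NamedTuple

class Operator(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    del_effects: FrozenSet[str]

cook  = Operator("Cook",  frozenset({"cleanHands"}), frozenset({"dinner"}),  frozenset())
wrap  = Operator("Wrap",  frozenset({"quiet"}),      frozenset({"present"}), frozenset())
carry = Operator("Carry", frozenset(),               frozenset(),            frozenset({"garbage", "cleanHands"}))
dolly = Operator("Dolly", frozenset(),               frozenset(),            frozenset({"garbage", "quiet"}))

initial = frozenset({"garbage", "cleanHands", "quiet"})
positive_goals, negative_goals = {"dinner", "present"}, {"garbage"}

def goal_test(state):
    # All positive goals present, no negative goal present.
    return positive_goals <= state and not (negative_goals & state)
```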

  32. Dinner Date Graph

  33. Mutual Exclusion Classes • Interference (Precondition-Effect) • Inconsistent Effects • Inconsistent Support • Competing Needs

  34. Observation 1: Propositions monotonically increase (they are always carried forward by no-ops). [Figure: successive proposition levels of a planning graph, each containing all literals of the previous level.]

  35. Observation 2: Actions monotonically increase. [Figure: successive action levels of a planning graph, each containing all actions of the previous level.]

  36. Observation 3: Proposition mutex relationships monotonically decrease. [Figure: mutex arcs between propositions disappearing at later levels.]

  37. Observation 4: Action mutex relationships monotonically decrease. [Figure: mutex arcs between actions disappearing at later levels.]

  38. Observation 5 (Sum Up): the Planning Graph ‘levels off’. • After some level k, all levels are identical. • This is because the space of literals is finite, the set of literals never decreases, and mutexes never reappear.

  39. Graph Plan Algorithm: • Grow the planning graph (PG) until all goals are reachable and not mutex. (If the PG levels off first, fail.) • Search the PG for a valid plan. • If none is found, add a level to the PG and try again. (A simplified graph-growing sketch follows below.)
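
A simplified sketch of the graph-growing part (mutex bookkeeping, no-ops, action levels, and the backward solution extraction are all omitted, so this only tracks which literals can appear at each level and whether the graph has leveled off).

```python
# Grow proposition levels until the goals all appear or the graph levels off.
# Real GraphPlan also records action levels and mutex pairs at every level.
def grow_planning_graph(initial, goal, operators, max_levels=100):
    levels = [frozenset(initial)]
    for _ in range(max_levels):
        props = levels[-1]
        if goal <= props:
            return levels, True                  # goals reachable (ignoring mutexes)
        applicable = [op for op in operators if op.preconditions <= props]
        nxt = props | frozenset(l for op in applicable for l in op.add_effects)
        if nxt == props:
            return levels, False                 # leveled off without the goals
        levels.append(nxt)
    return levels, False
```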

  40. Search for a solution plan

  41. Search for a solution plan • Backward chain on the planning graph. • Achieve goals level by level. • At level k, pick a subset of non-mutex actions to achieve the current goals. Their preconditions become the goals for level k-1. • Build the goal subset by picking each goal and choosing an action to add; reuse one already selected if possible. Do forward checking on the remaining goals (backtrack if you can't pick a non-mutex action).

  42. Just Another Planning Approach Planning By Logic (SAT-Plan): • Convert the planning problem into a logic problem. • Solve the logic problem.

  43. SAT Plan Example INITIAL STATE: At(P1,SFO)_0 ∧ At(C1,JFK)_0 ∧ Plane(P1) ∧ Cargo(C1) ∧ Airport(SFO) ∧ Airport(JFK). RULES: At(x,y)_t ∧ Fly(x,y,z)_t ∧ Plane(x) ∧ Airport(y) ∧ Airport(z) ⇒ At(x,z)_{t+1} ∧ ¬At(x,y)_{t+1}, … GOAL STATE: At(C1,SFO)_T for some time step T. (A sketch of the overall SATPlan loop follows below.)
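
The overall SATPlan loop might be sketched as follows; encode_to_cnf and sat_solve are hypothetical placeholders standing in for a real encoder and SAT solver, not an actual API.

```python
# SATPlan control loop: unroll to horizon t, encode, call a SAT solver, and
# increase the horizon until a model (and hence a plan) is found.
def encode_to_cnf(problem, horizon):
    # Hypothetical: CNF clauses for the initial state at step 0, the goal at
    # step 'horizon', and action/frame axioms for every intermediate step.
    raise NotImplementedError

def sat_solve(clauses):
    # Hypothetical: return a satisfying assignment, or None if unsatisfiable.
    raise NotImplementedError

def satplan(problem, max_horizon=30):
    for t in range(max_horizon + 1):
        model = sat_solve(encode_to_cnf(problem, t))
        if model is not None:
            return model   # the true action variables, in time order, form the plan
    return None
```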

  44. Sum Up • POP: The most human-like. • Graph Plan: Winner of planning contests. • SAT Plan: Widely used in real problems, via: • Hard-coded logic solvers. • Mathematics and optimization. • Note: Combinations are also used.

  45. EXERCISES & Projects Implement either POP or Graph-Plan. As Exercise: On a hard-coded domain, without variable instantiation. – Send To: sharifian@ce.sharif.edu – Subject: AIEX-C11 As Project: Read the domain as PDDL, support variable instantiation, and so on.
