
Distributed problem solving and planning: Lecture outline






Presentation Transcript


1. Multi-Agent Systems, Lecture 6
University "Politehnica" of Bucharest, 2003 - 2004
Adina Magda Florea
adina@cs.pub.ro
http://turing.cs.pub.ro/blia_2004

2. Distributed problem solving and planning: Lecture outline
1 Distributed problem solving
2 Distributed planning
  2.1 Centralized planning for distributed plans
  2.2 Distributed planning for centralized plans
  2.3 Distributed planning for distributed plans
  2.4 Distributed planning and execution
3 An example: Partial global planning

3. 1. Distributed problem solving
• Group coherence - the agents want to work together - cooperative agents
• Competence - the agents must find ways to work together - coordinate to cooperate
• Task and result sharing - an agent has many tasks to do and asks other agents to do some of them; it must then integrate the results
• Distributed planning - the problem to be solved is to design and execute a plan in a distributed manner, by many agents

4. 2 Distributed planning
What can be distributed:
• the process of coming up with a plan is distributed among the agents
• the execution of the plan is distributed among the agents
Planning
• state representation and plan representation
• search vs. planning
  • representation of changes to the world state
  • representation of and reasoning about the plan (steps/actions)
• linear planning
• partial order planning
• hierarchical planning
• conditional planning
Planning vs. search

5. 2.1 Centralized planning for distributed plans
• Operators:
  • move(b,x,y): Precond: on(b,x) ∧ clear(b) ∧ clear(y); Postcond: on(b,y) ∧ clear(x) ∧ ¬on(b,x) ∧ ¬clear(y)
  • movetotable(b,x): Precond: on(b,x) ∧ clear(b); Postcond: on(b,T) ∧ clear(x) ∧ ¬on(b,x)
[Figure: blocks-world example with two agents, Agent1 ("I'm Bill") and Agent2 ("I'm Tom"). Initial state Si: on(A,B), on(C,D), on(E,F), on(B,T), on(D,T), on(F,T). Goal state Sf: on(B,A), on(F,D), on(A,E), on(D,C), on(E,T), on(C,T). A partial-order plan fragment shows the goal on(B,A) achieved by S1: move(B,T,A), with preconditions on(B,T), clear(B), clear(A); clear(A) achievable by movetotable(A,B) or move(A,B,y), here S2: move(A,B,E), with preconditions on(A,B), clear(A), clear(E); and on(E,T) achieved by S3: movetotable(E,F).]
1. Given a goal description, a set of operators, and an initial state description, generate a partial order plan
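To make the operator definitions above concrete, here is a minimal STRIPS-style sketch in Python; the encoding of states as sets of ground-literal strings and the apply_op helper are assumptions made for illustration, not part of the lecture.

```python
def move(b, x, y):
    """move(b,x,y): move block b from x onto block y."""
    return {
        "pre": {f"on({b},{x})", f"clear({b})", f"clear({y})"},
        "add": {f"on({b},{y})", f"clear({x})"},
        "del": {f"on({b},{x})", f"clear({y})"},
    }

def movetotable(b, x):
    """movetotable(b,x): move block b from x onto the table T."""
    return {
        "pre": {f"on({b},{x})", f"clear({b})"},
        "add": {f"on({b},T)", f"clear({x})"},
        "del": {f"on({b},{x})"},
    }

def apply_op(state, op):
    """Apply an operator when its preconditions hold; otherwise return None."""
    if not op["pre"] <= state:
        return None
    return (state - op["del"]) | op["add"]

# Initial state Si from the slide; the clear() facts follow from the stacks
# (A, C and E are the tops of the three towers, T is the table).
Si = {"on(A,B)", "on(C,D)", "on(E,F)", "on(B,T)", "on(D,T)", "on(F,T)",
      "clear(A)", "clear(C)", "clear(E)"}

print(apply_op(Si, movetotable("E", "F")))  # E moves to the table, F becomes clear
```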

6. Plan steps:
S1: move(B,T,A), S2: move(A,B,E), S3: movetotable(E,F), S4: move(F,T,D), S5: move(D,T,C), S6: movetotable(C,D)
To satisfy the preconditions, we have: S2 < S1, S3 < S4, S6 < S4, S6 < S5
Also: S2 is a threat to S3 ⇒ S3 < S2; S4 is a threat to S5 ⇒ S5 < S4
Then the partial ordering is: S3 < S2 < S1; S6 < S5 < S4; S3 < S4
Any total ordering that satisfies this partial ordering is a good plan for Agent1. What if we have 2 agents?
2. Decompose the plan into subproblems so as to minimize the order relations across plans
3. Insert synchronization
4. Allocate the subplans to agents
DECOMP1: Subplan1 = S3 < S2 < S1; Subplan2 = S6 < S5 < S4; and S3 < S4
Agent1: S3 < send(clear(F)) < S2 < S1
Agent2: S6 < S5 < wait(clear(F)) < S4
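As a sanity check of the ordering constraints derived on this slide, the short sketch below (an illustration, not from the lecture) encodes the resulting partial order and enumerates the total orderings consistent with it; any one of them is a valid plan for a single agent.

```python
from itertools import permutations

steps = ["S1", "S2", "S3", "S4", "S5", "S6"]
# Partial ordering from the slide: S3 < S2 < S1, S6 < S5 < S4, S3 < S4.
order = {("S3", "S2"), ("S2", "S1"), ("S6", "S5"), ("S5", "S4"), ("S3", "S4")}

def consistent(seq):
    """True if the sequence respects every ordering constraint."""
    pos = {s: i for i, s in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in order)

linearizations = [seq for seq in permutations(steps) if consistent(seq)]
print(len(linearizations), "total orderings, e.g.", linearizations[0])
```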

7. DECOMP2: Subplan1 = S3 < S5 < S4; Subplan2 = S6 < S2 < S1; and S3 < S2 and S6 < S5
Agent1: S3 < send(don't_care(E)) < wait(clear(D)) < S5 < S4
Agent2: S6 < wait(don't_care(E)) < send(clear(D)) < S2 < S1
• Obviously, DECOMP2 has more order relations among the subplans than DECOMP1
• Therefore, we choose DECOMP1:
Agent1: S3 < send(clear(F)) < S2 < S1
Agent2: S6 < S5 < wait(clear(F)) < S4
• But the capability constraints ("I know how to move only D, E, F" / "I know how to move only A, B, C") make this allocation fail, so we go back to DECOMP2
4. If the allocation of subplans fails, redo the decomposition (step 2); if the allocation fails for every decomposition, redo the plan generation (step 1)
5. Execute and monitor the subplans
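The comparison between the two decompositions amounts to counting the ordering relations that cross subplan boundaries; the following sketch (illustrative only) reproduces that count. Each cross relation requires a send/wait synchronization pair, so fewer crossings means less communication.

```python
# Ordering constraints of the global partial-order plan (from slide 6).
order = {("S3", "S2"), ("S2", "S1"), ("S6", "S5"), ("S5", "S4"), ("S3", "S4")}

def cross_relations(subplan1, subplan2):
    """Ordering relations whose two steps end up in different subplans."""
    return sorted((a, b) for a, b in order if (a in subplan1) != (b in subplan1))

decomp1 = ({"S3", "S2", "S1"}, {"S6", "S5", "S4"})
decomp2 = ({"S3", "S5", "S4"}, {"S6", "S2", "S1"})

print("DECOMP1 cross relations:", cross_relations(*decomp1))  # one: S3 < S4
print("DECOMP2 cross relations:", cross_relations(*decomp2))  # two: S3 < S2 and S6 < S5
```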

8. 2.2 Distributed planning for centralized plans
• Each of the planning agents generates a partial plan in parallel; these plans are then merged into a global plan
• parallel to result sharing
• may involve negotiation
Agent 1 is specialized in doing movetotable(b,x); Agent 2 is specialized in doing move(b,x,y)
Agent 1, based on Sf, comes up with the partial plan PAgent1 = { S3: movetotable(E,F) satisfies on(E,T); S6: movetotable(C,D) satisfies on(C,T); no ordering }
Agent 2, based on Sf, comes up with the partial plan PAgent2 = { S1: move(B,T,A), S2: move(A,B,E) satisfy on(B,A) ∧ on(A,E); S4: move(F,T,D), S5: move(D,T,C) satisfy on(F,D) ∧ on(D,C); ordering S2 < S1 and S5 < S4 }
• Merge PAgent1 with PAgent2 by checking preconditions and threats
• This establishes the order S3 < S2, S6 < S5, S3 < S4, in addition to the ordering of PAgent2
• Then give any instance of this partial plan to an execution agent to carry it out
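The merge step can be sketched as a union of the two specialists' steps and orderings plus the cross-plan constraints found when checking preconditions and threats; the constraint set below is copied from the slide rather than recomputed, and the dictionary encoding is only illustrative.

```python
# Partial plan of Agent 1 (the movetotable specialist).
p_agent1 = {"steps": {"S3", "S6"}, "order": set()}
# Partial plan of Agent 2 (the move specialist).
p_agent2 = {"steps": {"S1", "S2", "S4", "S5"},
            "order": {("S2", "S1"), ("S5", "S4")}}
# Constraints established by checking preconditions and threats across plans.
cross_order = {("S3", "S2"), ("S6", "S5"), ("S3", "S4")}

merged = {"steps": p_agent1["steps"] | p_agent2["steps"],
          "order": p_agent1["order"] | p_agent2["order"] | cross_order}
print(sorted(merged["order"]))
```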

9. [Figure: the blocks-world initial state Si and goal state Sf, with the goal split into sub-stacks for the two specialists.]
• The problem is decomposed and distributed among various planning specialists, each of which then generates its portion of the plan
• similar to task sharing
• may involve backtracking
Agent 1 knows only how to deal with 2-block stacks; Agent 2 knows only how to deal with 3-block stacks

10. 2.3 Distributed planning for distributed plans
a) Plan merging
• Agents formulate local plans to satisfy their own goals
• The local plans are exchanged
• The local plans are combined, analyzing them for positive and negative interactions
• Messages and/or timing commitments are added to resolve negative plan interactions and to exploit positive plan interactions
Interacting situations:
• Positive interactions between plans
  • redundant actions
    • static detection: sequencing
  • favouring actions
    • dynamic detection: incorporation
• Negative interactions between plans
  • harmful actions
  • exclusive actions
  • incompatible actions

11. Negative interactions - what type?
[Figure: initial state Si and goal state Sf for a blocks-world example with blocks A..F and a single shared lifter.]
• movehigh(b,x,y): Precond: have_lifter ∧ clear(b) ∧ clear(y) ∧ on(y,z) ∧ z ≠ T; Postcond: on(b,y) ∧ clear(x) ∧ ¬on(b,x) ∧ ¬clear(y) ∧ free_lifter
• pick_lifter: Precond: free_lifter; Postcond: have_lifter ∧ ¬free_lifter
Agent1: { S1: move(B,T,A) < S2: pick_lifter < S3: movehigh(E,T,B) }
Agent2: { R1: move(C,T,D) < R2: pick_lifter < R3: movehigh(F,T,C) }
[Diagram: causal links among R1, S1, S2, S3, R2, R3 and the need_lifter / free_lifter conditions, leading to Sf.]
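The conflict in this example is over the single shared lifter: both plans contain a pick_lifter step, so the two movehigh actions exclude each other. A tiny detection sketch follows (an illustration, not the lecture's algorithm).

```python
plan1 = ["S1:move(B,T,A)", "S2:pick_lifter", "S3:movehigh(E,T,B)"]
plan2 = ["R1:move(C,T,D)", "R2:pick_lifter", "R3:movehigh(F,T,C)"]

def resource_conflicts(p1, p2, resource_step="pick_lifter"):
    """Pairs of steps from the two plans that acquire the same single resource."""
    return [(a, b) for a in p1 if resource_step in a
                   for b in p2 if resource_step in b]

print(resource_conflicts(plan1, plan2))  # [('S2:pick_lifter', 'R2:pick_lifter')]
```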

12. Positive interactions
Give examples of positive interactions:
• redundant actions
• favouring actions
Problems with the approach?
b) Iterative plan formation
• build all feasible plans
• build partial order plans to facilitate plan merging
• build abstract plans to be iteratively refined - see the next section and the PGP section

13. c) Hierarchical distributed planning
• Design plans at several levels of abstraction
• Use abstract plans
• Abstract operator - a kind of macro-operator = a sequence of applicable operators
[Example hierarchy: the abstract task "Write paper" is refined into steps such as Edit content, Read references, Organize ideas, Edit text, …, Find editor, Check for errors, Edit figures.]

14. Hierarchical behavior-space search algorithm
1. Level ← 0, Agent_List = {Agent1, …, AgentN}
2. for i = 1, N do
   2.1 Agenti sends the description of Gi and Pi to every Agentj, j = 1, N, j ≠ i
   2.2 Agenti gets Gj, Pj from every Agentj, j = 1, N, j ≠ i
   2.3 if Pi is compatible with {Pj}, j = 1, N, j ≠ i, then Agenti removes itself from Agent_List
3. if Agent_List = { } then exit
4. Let N be the new number of agents in Agent_List
   4.1 Determine the conflicts between {Pi}
   4.2 if the conflicts are to be resolved at a lower level then
       (a) Level ← Level + 1
       (b) go to step 2
5. 5.1 Sort the agents in Agent_List
   5.2 for i = 1, N-1, according to the ordering, do
       (a) make Agenti the current superior
       (b) send Pi to each Agentj, j = i+1, N
       (c) for j = i+1, N do
           - Agentj checks the compatibility of Pj with Pi and replans if necessary
           - Agentj checks compatibility with Pk, k = 1, i-1, and replans if necessary
• A kind of CSP:
  - backward checking
  - forward checking
• Ordering: what heuristic?
• Add an exit condition for the case in which no solution exists
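A heavily simplified, runnable rendering of the exchange-and-filter loop in steps 2-3 is sketched below; the compatibility test (plans are compatible when they touch disjoint sets of blocks) is a stand-in assumption, since the real test is domain dependent, and the conflict resolution of step 5 is not shown.

```python
# Abstract plans at the current level, keyed by agent; the "touches" sets are
# a stand-in for whatever the real compatibility test inspects.
plans = {
    "Agent1": {"touches": {"A", "B"}},
    "Agent2": {"touches": {"B", "C"}},
    "Agent3": {"touches": {"D"}},
}

def compatible(mine, others):
    """Stand-in test: a plan is compatible if it shares nothing with the others."""
    return all(mine["touches"].isdisjoint(o["touches"]) for o in others)

agent_list = list(plans)
# Steps 2.1-2.3: every agent broadcasts its plan and, if it is compatible with
# all the others, removes itself from Agent_List.
agent_list = [a for a in agent_list
              if not compatible(plans[a], [plans[o] for o in plans if o != a])]
print("Still in conflict:", agent_list)  # Agent1 and Agent2 clash over block B
```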

15. 2.4 Distributed planning and execution
Real world: incomplete and incorrect information
a) Contingency planning
• Conditional planning - deals with incomplete information by constructing a conditional plan that accounts for each possible situation or contingency that could arise
• sensing actions
• a context of a plan step, i.e., a set of conditions on the environment that must hold in order for the step to be executed ⇒ introduces disjunctive steps + conditional links among plan steps
[Conditional-plan example:
Start: on(A,B) ∧ clear(C) ∧ clear(A)
Checkarm(Ag1) - sensing action
if armbroken(Ag1): ask Ag2 to move(A,B,C), i.e. negotiate with Ag2 for it to achieve the move
if ¬armbroken(Ag1): move(A,B,C), Context: ¬armbroken(Ag1)
… plan to achieve on(B,A) …
Finish: on(B,A) ∧ on(A,C)]
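The conditional plan above can be represented as steps tagged with contexts; at execution time only the steps whose context holds in the sensed environment are run. The encoding below is a sketch with assumed names, not PGP's or any particular planner's representation.

```python
# Each step carries a context predicate over the sensed environment.
plan = [
    {"action": "checkarm(Ag1)",          "context": lambda env: True},
    {"action": "move(A,B,C)",            "context": lambda env: not env["armbroken(Ag1)"]},
    {"action": "ask Ag2 to move(A,B,C)", "context": lambda env: env["armbroken(Ag1)"]},
    {"action": "achieve on(B,A)",        "context": lambda env: True},
]

def execute(plan, env):
    for step in plan:
        if step["context"](env):         # conditional link: run only in context
            print("executing", step["action"])

execute(plan, {"armbroken(Ag1)": True})  # arm broken: the move is delegated to Ag2
```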

16. b) Execution monitoring
• The agent does not execute the plan with its "eyes closed": it monitors what is happening while it executes the plan, and it can replan to achieve the goal in the new situation
• Conditional planning = think in advance about several alternatives
• Monitoring and replanning = defer the job: "I shall see what to do if new conditions occur"
c) Social laws
• Specify which actions are legal to execute in a certain context
• Find conflicting situations, analyze which concurrent actions lead to these situations, and prohibit such concurrent actions by social laws
• This approach is suited, in general, to loosely coupled subproblems / subplans

17. 3 Partial Global Planning
• Initially applied in the Distributed Vehicle Monitoring Testbed (DVMT), then extended to be domain independent
• Integrates planning and execution
• Coordination by means of exchanging partial plans
• Partial plans: abstract plans + partial ordering ⇒ plan merging
• The domain: unpredictable, unreliable information
• The tasks are inherently distributed; each agent performs its own task
• The agents are not aware of the global state of the system; however, there is a common goal: converge on a consistent map of vehicle movements by integrating the partial tracks formed by different agents into a single complete map, or into a consistent set of local maps distributed among the agents
• Cooperative agents (collectively motivated)

18. 3.1 Aircraft monitoring scenario
• each type of aircraft produces a characteristic spectrum of acoustic frequencies
• signals may be improperly sensed; there is ghosting and environmental noise
• there are two agents, A and B, whose regions of interest overlap; each agent receives data only about its own region, from its own acoustic sensor
• the goal is to identify any aircraft moving through the region of interest, determine their types, and track them across regions
[Figure: the raw data input and the final solution map.]

19. 3.2 Agent functioning
1. Represent its own expected activity by a set of local (tentative) plans, at two levels: a higher level (abstract plans) and a detailed level; local plans may involve alternative actions depending on the results of previous actions and on changes in the environment ⇒ conditional plans; hierarchical plans
2. Communicate abstract local plans to the other agents and get such plans from them ⇒ another form of communication
3. Model the collective activity of the agents by forming Partial Global Plans and finding out how they can be improved for better coordination:
  • identify when the goals of one or more agents can be considered subgoals of a single global goal ⇒ partial global goal
  • construct a PGP and identify opportunities for improved coordination
  • search for an improved PGP
4. Based on step 3, propose changes to one or more agents' plans ⇒ negotiation
5. Modify its local plan according to the proposal, and plan what results will be communicated to the other agents and when

20. Two types of problem-solving activities:
• task-level activities - build a map of vehicle movements
• meta-level activities - decide how and with whom to coordinate
Result sharing - agents exchange appropriate results at the right time
Task sharing - agents may propose potential plans that involve the transfer of tasks among them - negotiation
[Figure: task-sharing example - "A: Process 1/2 data", "Who?: Process 1/2 data"; then "A: Process 1/3 data", "B: Process 1/3 data", "Who?: Process 1/3 data".]

21. 3.3 Plan representation
A plan represents future activity at two levels of detail:
• at the higher level, it outlines the major steps the agent expects to take to achieve its goal - the abstract plan
• at the detailed level, it specifies the primitive actions needed to achieve the next step in the abstract plan; as the plan is executed, new details are added incrementally
For each action:
• D - the set of data to be processed by the action
• P - the set of procedures to be applied to the data
• Tstart - the estimated start time of the action
• Tend - the estimated end time of the action
• abres - an estimate of the characteristics of, and confidence in, the abstract partial result that will be developed as the conclusion of the action
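The action record described above maps naturally onto a small data structure; the sketch below reuses the slide's field names (D, P, Tstart, Tend, abres), while the types and example values are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PlanAction:
    D: List[str]        # the set of data to be processed by the action
    P: List[str]        # the set of procedures to be applied to the data
    Tstart: float       # estimated start time of the action
    Tend: float         # estimated end time of the action
    abres: str          # estimated abstract partial result and confidence

a = PlanAction(D=["track segment 17"], P=["classify", "extend_track"],
               Tstart=0.0, Tend=4.5, abres="aircraft type X, confidence 0.7")
print(a)
```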

22. 3.4 PGP formation and coordination
• Each agent is aware of (some of) the other agents in the system and of their domains of influence
• The agent looks at its current network model: the goals, actions and plans of the other agents
• It identifies that several agents are working on goals that are pieces of some larger system goal - a partial global goal
• It then builds the plan-activity-map = the plan actions to be executed concurrently by itself and the other agents, including the costs and expected results of the actions - the PGP
• The agent searches through alternative orderings of the PGP actions to reduce the time and communication needs; the search is performed by hill-climbing, using the merit rating of actions as the heuristic
Criteria for rating the actions:
• the action extends a partial result (a vehicle-tracking hypothesis)
• the action produces a partial result that might help some other agents form their partial results
• how long the action is expected to take
• From the modified plan-activity-map, the agent builds the solution-construction-graph = how the agents should interact, including specifications of which partial results to exchange and when to exchange them
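The rating criteria listed above can be combined into a single merit score; the sketch below simply reorders actions greedily by that score, a simplified stand-in for the hill-climbing search over orderings, and the weights and action names are assumptions.

```python
actions = [
    {"name": "extend_track_3", "extends_result": True,  "helps_other": False, "duration": 2},
    {"name": "new_track_7",    "extends_result": False, "helps_other": True,  "duration": 5},
    {"name": "reprocess_old",  "extends_result": False, "helps_other": False, "duration": 1},
]

def rating(a):
    """Merit of an action, combining the three criteria from the slide."""
    return (2.0 * a["extends_result"]    # extends a vehicle-tracking hypothesis
            + 1.5 * a["helps_other"]     # partial result useful to another agent
            - 0.1 * a["duration"])       # shorter actions are preferred

for a in sorted(actions, key=rating, reverse=True):
    print(round(rating(a), 2), a["name"])
```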

23. 3.5 Communication and organization
• Agents must communicate about their abstract local plans
• The knowledge used to guide communication forms the Meta-Level Organization (MLO): it specifies roles and controls communication
• For each agent, the MLO specifies:
  • the agents it has authority over
  • the agents that have authority over it
  • the agents that have equal authority
• Centralized coordination
• Hierarchical coordination
• Distributed coordination
• The agent must also plan when to communicate task results, based on the PGP:
  • when a completed task may be of interest to another agent
  • build a tree of exchanges such that at the root both agents have the result
  • communicative actions are included in the agent's local plan
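The per-agent MLO record described above can be written down as a small structure; the field names below are illustrative, not taken from the PGP papers.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class MLO:
    authority_over: Set[str] = field(default_factory=set)      # agents this agent directs
    under_authority_of: Set[str] = field(default_factory=set)  # agents that direct it
    peers: Set[str] = field(default_factory=set)                # agents with equal authority

# A hierarchical coordination example: A directs B and C, and treats D as a peer.
mlo_A = MLO(authority_over={"B", "C"}, peers={"D"})
print(mlo_A)
```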

24. References
• E.H. Durfee. Distributed problem solving and planning. In Multiagent Systems - A Modern Approach to Distributed Artificial Intelligence, G. Weiss (Ed.), The MIT Press, 2001, p. 121-164.
• V.R. Lesser. A retrospective view of FA/C distributed problem solving. IEEE Trans. on Systems, Man, and Cybernetics, 21(6), Nov/Dec 1991, p. 1347-1362.
• E.H. Durfee, V.R. Lesser. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Trans. on Systems, Man, and Cybernetics, 21(5), Sept. 1991, p. 1167-1183.
• K.S. Decker, V.R. Lesser. Generalizing the partial global planning algorithm. International Journal of Intelligent Cooperative Information Systems, 1(2), 1992, p. 319-346.
• S. Russell, P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995, Ch. 11, 12, 13.
