
Multi-Agent Systems Negotiation



  1. Multi-Agent Systems Negotiation • Shari Naik

  2. Negotiation • Inter-agent cooperation • Conflict resolution • Agents communicate their respective desires • Compromise to a mutually beneficial agreement

  3. Negotiation in Cooperative Domains • Jeffrey Rosenschein • Gilad Zlotkin

  4. Domains • Distributed problem solving • Distributed but centrally designed AI systems • A global problem to solve • Multi-agent systems • Distributed, with different designers • Agents working toward different goals • Three domain types: Task Oriented, State Oriented, Worth Oriented

  5. Task Oriented Domain • Non-conflicting jobs • Negotiation: redistribute tasks to everyone’s mutual benefit • Example – Postmen domain

  6. State Oriented Domain • Goals are acceptable final states • Actions have side effects – an agent doing one action might hinder or help another agent • Negotiation: develop joint plans and schedules for the agents, to help and not hinder each other • Example – Slotted blocks world

  7. Worth Oriented Domain • A worth function rates the acceptability of final states • Negotiation: a joint plan, schedules, and goal relaxation • May reach a state that is a little worse than the ultimate objective • Example – Multi-agent Tileworld

  8. Task Oriented Domain • Tuple <T, A, c> • T – set of tasks • A – list of agents • c – cost function from any finite set of tasks to a real number • Encounter – a list T1, …, Tn of finite sets of tasks from T, such that each agent needs to achieve all the tasks in its set
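
A minimal Python sketch of this tuple; the names `TaskOrientedDomain`, `Task`, and `Encounter` are illustrative, not from Rosenschein and Zlotkin:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

Task = str

@dataclass(frozen=True)
class TaskOrientedDomain:
    tasks: FrozenSet[Task]                       # T: all tasks in the domain
    agents: List[str]                            # A: the agents
    cost: Callable[[FrozenSet[Task]], float]     # c: set of tasks -> cost

# An encounter assigns each agent the finite set of tasks it must achieve.
Encounter = List[FrozenSet[Task]]                # T1, ..., Tn
```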

  9. Building Blocks • Precise specification of the domain • Negotiation protocol • Negotiation strategy • Assumptions • Agents are expected-utility maximizers • Complete knowledge • No history • Commitments are verifiable

  10. Domain Definitions • Graph (city map) G = (V, E) • v ∈ V – nodes (addresses / post office) • e ∈ E – edges (roads) • Weight function (road distance) w : E → ℕ • Letters of agent i: L_i, for i ∈ {A, B} • L_A ∩ L_B = ∅ • Cost(L) ∈ ℕ – weight of the minimum-weight cycle that starts at the post office, visits all vertices of L, and ends at the post office
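
For small examples, Cost(L) can be computed by brute force. A sketch, assuming a `dist` table of pairwise distances and a post-office node `"PO"` (both hypothetical names):

```python
from itertools import permutations

def cost(letters, dist, po="PO"):
    """Cost(L): cheapest cycle PO -> all addresses in `letters` -> PO.
    `dist` maps an ordered pair of nodes to a distance; brute force,
    so only usable for a handful of letters."""
    if not letters:
        return 0
    best = float("inf")
    for order in permutations(letters):
        route = [po, *order, po]
        best = min(best, sum(dist[u, v] for u, v in zip(route, route[1:])))
    return best
```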

  11. Definitions • Deal – division of L_A ∪ L_B into two disjoint subsets (D_A, D_B) such that • D_A ∪ D_B = L_A ∪ L_B • D_A ∩ D_B = ∅ • Utility – the difference between the cost of an agent achieving its goal alone and the cost of its part of the deal • Utility_i(D_A, D_B) = Cost(L_i) − Cost(D_i)
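
The deal utility translates directly, reusing the `cost` helper from the previous sketch (agent indices 0 and 1 stand for A and B):

```python
def utility(i, deal, standalone, dist):
    """Utility_i(D_A, D_B) = Cost(L_i) - Cost(D_i).
    i: 0 for A, 1 for B; deal = (D_A, D_B); standalone = (L_A, L_B)."""
    return cost(standalone[i], dist) - cost(deal[i], dist)
```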

  12. Properties of a Deal d • Individually rational – ∀i ∈ {A, B}, Utility_i(d) ≥ 0 • Pareto optimal – there is no other deal d′ that dominates d (at least as good for both agents and strictly better for one) • Negotiation set – the set of deals that are individually rational and Pareto optimal • P(d) – the product of the two agents’ utilities from d
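
A brute-force sketch that enumerates every pure deal, filters down to the negotiation set, and records P(d); exponential in the number of letters, so only for tiny examples:

```python
from itertools import combinations

def negotiation_set(L_A, L_B, dist):
    """All individually rational, Pareto-optimal pure deals."""
    letters = frozenset(L_A) | frozenset(L_B)
    deals = [(frozenset(c), letters - frozenset(c))
             for r in range(len(letters) + 1)
             for c in combinations(letters, r)]
    utils = {d: tuple(utility(i, d, (L_A, L_B), dist) for i in (0, 1))
             for d in deals}
    rational = [d for d in deals if min(utils[d]) >= 0]
    def dominated(d):
        return any(all(a >= b for a, b in zip(utils[e], utils[d]))
                   and utils[e] != utils[d] for e in rational)
    nego = [d for d in rational if not dominated(d)]
    # P(d): product of the two agents' utilities, used for tie-breaking.
    return nego, {d: utils[d][0] * utils[d][1] for d in nego}
```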

  13. Negotiation Protocol • A product-maximizing negotiation protocol • One-step protocol • Concession protocol • At each step t ≥ 0, A offers d(A,t) and B offers d(B,t), such that • Both deals are from the negotiation set • ∀i ∈ {A, B} and ∀t > 0, Utility_i(d(i,t)) ≤ Utility_i(d(i,t−1)) • Negotiation ending • Conflict – Utility_i(d(i,t)) = Utility_i(d(i,t−1)) (neither agent concedes) • Agreement – ∃j ≠ i ∈ {A, B} such that Utility_j(d(i,t)) ≥ Utility_j(d(j,t)) • Only A agrees ⇒ accept d(B,t) • Only B agrees ⇒ accept d(A,t) • Both agree ⇒ accept the d(k,t) with P(d(k,t)) = max{P(d(A,t)), P(d(B,t))} • Both agree and P(d(A,t)) = P(d(B,t)) ⇒ flip a coin • Applies to pure deals and mixed deals
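
A sketch of this concession loop, assuming strategies that map the offer history to the next offer and utility functions `u_A`, `u_B` over deals; termination and tie-breaking follow the slide's rules:

```python
import random

def negotiate(strategy_A, strategy_B, u_A, u_B, max_steps=100):
    """Monotonic concession loop between two agents."""
    history = []
    for _ in range(max_steps):
        d_A, d_B = strategy_A(history), strategy_B(history)
        a_accepts = u_A(d_B) >= u_A(d_A)   # B's offer is good enough for A
        b_accepts = u_B(d_A) >= u_B(d_B)   # A's offer is good enough for B
        if a_accepts and b_accepts:        # both: take the larger product
            pA = u_A(d_A) * u_B(d_A)
            pB = u_A(d_B) * u_B(d_B)
            if pA != pB:
                return d_A if pA > pB else d_B
            return random.choice([d_A, d_B])   # equal products: coin flip
        if a_accepts:
            return d_B
        if b_accepts:
            return d_A
        if history and (d_A, d_B) == history[-1]:
            return None                    # neither agent conceded: conflict
        history.append((d_A, d_B))
    return None
```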

  14. Negotiation Strategies • How an agent should act given a set of rules • Definition – a function from the history of the negotiation to the current message • Risk – an indication of how much an agent is willing to risk a conflict by sticking to its last offer • Risk(A,t) = (utility A loses by accepting B’s offer) / (utility A loses by causing a conflict) • The smaller the loss from conceding, the higher the risk an agent will tolerate • Rational negotiation strategy – at any step t+1, A sticks to its last offer if Risk(A,t) > Risk(B,t)
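
The risk ratio in code, under the usual convention that the conflict outcome has utility 0 and that risk is 1 when standing firm costs nothing:

```python
def risk(u_own, own_offer, other_offer):
    """Risk = (utility lost by accepting the other's offer)
            / (utility lost by causing a conflict).
    With conflict utility 0, the denominator is just u_own(own_offer)."""
    if u_own(own_offer) == 0:
        return 1.0                      # conceding costs nothing: max risk
    return (u_own(own_offer) - u_own(other_offer)) / u_own(own_offer)
```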

  15. Negotiation Strategies (cont.) • Zeuthen strategy • Start – A offers B the minimal offer: Utility_B(d(A,1)) = min_{d ∈ NS} Utility_B(d) • Next – A makes the minimal sufficient concession at step t+1 iff Risk(A,t) ≤ Risk(B,t) • If both agents follow this strategy, they will agree on a deal d* ∈ NS such that P(d*) = max_{d ∈ NS} P(d)
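
One Zeuthen step as a sketch on top of `risk`; the "minimal sufficient concession" is only approximated here (the deal best for oneself among those that concede something to the opponent):

```python
def zeuthen_offer(ns, u_own, u_other, own_last, other_last):
    """Next offer for the agent whose utility function is u_own."""
    if own_last is None:                       # opening: best deal for self,
        return max(ns, key=u_own)              # i.e. minimal for the opponent
    if risk(u_own, own_last, other_last) > risk(u_other, other_last, own_last):
        return own_last                        # higher risk: stand firm
    # Approximate minimal sufficient concession: concede something to the
    # opponent while keeping as much own utility as possible.
    better_for_other = [d for d in ns if u_other(d) > u_other(own_last)]
    return max(better_for_other, key=u_own) if better_for_other else other_last
```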

  16. Equilibrium • A negotiation strategy s is in equilibrium if, under the assumption that A uses s, B prefers s to any other strategy • The Zeuthen strategy is not in equilibrium: when both agents are about to concede, one can do better by standing firm

  17. Mixed Deal • Element of probability – the agents perform (D_A, D_B) with probability p, or the swapped division (D_B, D_A) with probability 1 − p • Cost_i([(D_A, D_B) : p]) = p·Cost(D_i) + (1 − p)·Cost(D_j) • Utility_i([d : p]) = Cost(L_i) − Cost_i([d : p]) • All-or-nothing deal – the mixed deal m = [(L_A ∪ L_B, ∅) : p], 0 ≤ p ≤ 1, such that m ∈ NS and P(m) = max_{d ∈ NS} P(d)
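
Expected cost and utility of a mixed deal, reusing the earlier helpers; `deal[1 - i]` is the swapped share:

```python
def mixed_cost(i, deal, p, dist):
    """Expected cost to agent i of [(D_A, D_B) : p]: it performs deal[i]
    with probability p and the swapped share deal[1 - i] otherwise."""
    return p * cost(deal[i], dist) + (1 - p) * cost(deal[1 - i], dist)

def mixed_utility(i, deal, p, standalone, dist):
    """Utility_i([d : p]) = Cost(L_i) - Cost_i([d : p])."""
    return cost(standalone[i], dist) - mixed_cost(i, deal, p, dist)
```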

  18. Incomplete Information • G and w are common knowledge • Agent i knows L_i but not L_j, j ≠ i • Solution • Exchange missing information • Penalty for lying • Possible lies • False information • Hiding letters • Phantom letters • Not carrying out a commitment

  19. Hidden Letters • Utility of A • Expected (on telling the truth) = 4 • Pure deal [(d, ∅) : 1/2] = 6 • Mixed deal [(d, ∅) : 3/8] = 3 3/4

  20. Phantom Letters • Utility of A • Expected (on telling the truth) = 3 • Pure deal [(d, ∅) : 1/2] = 4 • Mixed deal – possibility of being caught (all-or-nothing deal)

  21. Subadditive Task Oriented Domain • The cost of the union of task sets is less than or equal to the sum of the costs of the separate sets • For finite X, Y ⊆ T: c(X ∪ Y) ≤ c(X) + c(Y) • Example of a non-subadditive TOD

  22. Incentive Compatible Mechanism • L – lying is beneficial • T – telling the truth (honesty) is better • T/P – lying can be beneficial, but there is a chance of being caught and penalized

  23. Concave Task Oriented Domain • Take two task sets X and Y, where X is a subset of Y, and introduce another set of tasks Z • c(X ∪ Z) − c(X) ≥ c(Y ∪ Z) − c(Y) • The marginal cost of Z can only shrink as the base set grows

  24. Modular TOD • c(X ∪ Y) = c(X) + c(Y) − c(X ∩ Y)
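
Brute-force predicates for the three domain classes just defined; each tests its inequality over all subsets of a small task set, so they are illustrative only:

```python
from itertools import chain, combinations

def subsets(tasks):
    s = list(tasks)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_subadditive(tasks, c):
    return all(c(X | Y) <= c(X) + c(Y)
               for X in subsets(tasks) for Y in subsets(tasks))

def is_concave(tasks, c):
    return all(c(X | Z) - c(X) >= c(Y | Z) - c(Y)
               for X in subsets(tasks) for Y in subsets(tasks) if X <= Y
               for Z in subsets(tasks))

def is_modular(tasks, c):
    return all(c(X | Y) == c(X) + c(Y) - c(X & Y)
               for X in subsets(tasks) for Y in subsets(tasks))
```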

  25. Multi-Agent Compromise via Negotiation • Katia Sycara

  26. Negotiation Process for Conflicting Goals • Identify potential interactions • Modify intentions to avoid harmful interactions or create cooperative situations • Techniques required • Representing and maintaining belief models • Reasoning about other agents’ beliefs • Influencing other agents’ intentions and beliefs

  27. PERSUADER • A program to resolve problems in the labor-relations domain • Agents • Company • Union • Mediator • Tasks • Generation of a proposal • Generation of a counterproposal based on feedback from the dissenting party • Persuasive argumentation

  28. Negotiation Methods • Case-based reasoning • Preference analysis

  29. Case-Based Reasoning • Uses past negotiation experiences as guides for the present negotiation • Process (sketched below) • Retrieve appropriate precedent cases from memory • Select the most appropriate case • Construct an appropriate solution • Evaluate the solution for applicability to the current case • Modify the solution appropriately
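
A compact sketch of that five-step loop; `similarity`, `adapt`, `applicable`, and `repair` are hypothetical hooks supplied by the caller, not PERSUADER's actual routines:

```python
def cbr_propose(current, memory, similarity, adapt, applicable, repair):
    """Retrieve -> select -> construct -> evaluate -> modify,
    with a small bound on repair attempts."""
    precedents = sorted(memory, key=lambda case: similarity(case, current),
                        reverse=True)[:5]        # retrieve precedent cases
    best = precedents[0]                         # select the closest case
    solution = adapt(best, current)              # construct a solution
    for _ in range(3):                           # evaluate, then repair
        if applicable(solution, current):
            return solution
        solution = repair(solution, current)
    return solution
```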

  30. Case-Based Reasoning • Cases are organized and retrieved according to conceptual similarities • Advantages • Minimizes the need for information exchange • Avoids problems by reasoning from past failures (intentional reminding) • Repairs from past failures are reused, reducing computation

  31. Preference Analysis • A from-scratch planning method • Based on multi-attribute utility theory • Builds an overall utility curve out of the individual ones • Expresses the trade-offs an agent is willing to make • Properties of the proposed compromise • Maximizes joint payoff • Minimizes payoff difference
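
A sketch of the multi-attribute utility idea: each side scores a proposal as a weighted sum of per-issue utilities, and a compromise is scored by joint payoff minus the payoff gap. All names and the scoring rule are illustrative, not PERSUADER's actual implementation:

```python
def mau(proposal, weights, issue_utils):
    """Multi-attribute utility: weighted sum of per-issue utilities.
    proposal: issue -> value; weights: issue -> weight (summing to 1)."""
    return sum(w * issue_utils[issue](proposal[issue])
               for issue, w in weights.items())

def compromise_score(proposal, parties):
    """Prefer proposals that maximize joint payoff and minimize the gap.
    parties: [(weights, issue_utils), ...] for the two sides."""
    payoffs = [mau(proposal, w, u) for w, u in parties]
    return sum(payoffs) - abs(payoffs[0] - payoffs[1])
```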

  32. Persuasive Argumentation • Argumentation goals – ways that an agent’s beliefs and behaviors can be affected by an argument • Increasing payoff • Changing the importance attached to an issue • Changing the utility value of an issue

  33. Narrowing Differences • Gets feedback from the rejecting party • Objectionable issues • Reason for rejection • Importance attached to issues • Increases the payoff of the rejecting party by a greater amount than it reduces the payoff of the agreeing parties

  34. Experiments • Without memory – 30% more proposals • Without argumentation – fewer proposals and better solutions • No failure avoidance – more proposals with objections • No preference analysis – oscillatory behavior • No feedback – communication overhead up by 23%

  35. Thank You
