
Graphical Multiagent Models


Presentation Transcript


  1. Graphical Multiagent Models

    Quang Duong, Computer Science and Engineering. Chair: Michael P. Wellman
  2. Example: Election in the City of AA. May, a political analyst, draws on political discussion, phone surveys, demographic information, party registration, … to model the vote.
  3. Modeling Objectives. Construct a model that takes into account interactions among people (agents, linked by graph edges) for: representing the joint probability of all vote outcomes; computing marginal and conditional probabilities. Vote Republican or Democrat?
  4. Modeling Objectives (cont.). Generate predictions of individual actions and of dynamic behavior induced by individual decisions, at a detailed or aggregate level.
  5. More Applications of Modeling Multiagent Behavior: computer networks / the Internet, financial institutions, social networks.
  6. Challenges: Uncertainty from the system modeler's perspective. 1a. Agent choice: vote for a personal favorite or conform with others? 1b. Correlation: will the historic district of AA unanimously pick one candidate to support? 1c. Interdependence: May does not know all friendship relations in AA.
  7. Challenges: Complexity. 2a. Representation and inference: the number of all action configurations (all vote outcomes) is exponential in the number of agents (people). 2b. Historical information: people may change their minds about whom to vote for after discussions.
  8. Existing Approaches That This Work Builds On. Game-theoretic approach: assumes game structure / perfect rationality. Statistical modeling approach: uses aggregate statistical measures / makes simplifying assumptions.
  9. Approach Outline. Graphical Multiagent Models (GMMs) are probabilistic graphical models designed to facilitate expression of different knowledge sources about agent reasoning and capture correlated behaviors (addressing uncertainty), while exploiting dependence structure (addressing complexity).
  10. Roadmap: (Ch. 2) Background; (Ch. 3) GMM (static); (Ch. 4) History-Dependent GMM; (Ch. 5) Learning Dependence Graph Structure; (Ch. 6) Application: Information Diffusion.
  11. Multiagent Systems. n agents {1, …, i, …, n}. Agent i chooses action a_i; the joint action (action configuration) of the system is a = (a_1, …, a_n). In dynamic settings: time period t, time horizon T; history H^t with history horizon h, H^t = (a^{t-h}, …, a^{t-1}).
  12. Game Theory. Each player (agent i) chooses a strategy (action a_i); the strategy profile (joint action a) collects all players' choices. Payoff function: u_i(a_i, a_-i). Player i's regret ε_i(a): the maximum gain player i could obtain by choosing some strategy a_i' instead of a_i, given that everyone else fixes their strategies. a* is a Nash equilibrium (NE) if for every player i, the regret ε_i(a*) = 0.
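To make the regret definition concrete, here is a minimal sketch (not part of the original slides) that computes a player's regret and checks for a pure-strategy Nash equilibrium; the two-voter payoff function and all names below are illustrative assumptions.

```python
# Minimal sketch: regret and pure-strategy Nash equilibrium in a tiny normal-form game.
import itertools

def regret(i, a, actions, payoff):
    """Max gain player i could obtain by deviating from a[i], others held fixed."""
    u_now = payoff(i, a)
    best = max(payoff(i, a[:i] + (ai,) + a[i + 1:]) for ai in actions[i])
    return best - u_now

def is_pure_nash(a, actions, payoff, tol=1e-9):
    return all(regret(i, a, actions, payoff) <= tol for i in range(len(a)))

# Example (assumed payoffs): two voters who each gain by matching the other's choice.
actions = [("R", "D"), ("R", "D")]
payoff = lambda i, a: 1.0 if a[0] == a[1] else 0.0
for a in itertools.product(*actions):
    print(a, is_pure_nash(a, actions, payoff))
```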
  13. Graphical Representations of Multiagent Systems. 1. Graphical game models [Kearns et al. '01]: an agent's payoff depends on the strategies chosen by itself and its neighbors J_i; payoff/utility: u_i(a_i, a_Ji). Similar approaches: multiagent influence diagrams (MAIDs) [Koller & Milch '03], networks of influence diagrams [Gal & Pfeffer '08], action-graph games [Jiang et al. '11].
  14. Graphical Representations (cont.). 2. Probabilistic graphical models: Markov random fields (static) [Kindermann & Snell '80, Koller & Friedman '09]; dynamic Bayesian networks [Kanazawa & Dean '89, Ghahramani '98].
  15. This Work. Building on probabilistic graphical models and incorporating game models, this work demonstrates and examines the benefits of applying probabilistic graphical models to the problem of modeling multiagent behavior, in scenarios with different sets of assumptions and information available to the system modeler.
  16. Roadmap: (Ch. 2) Background; (Ch. 3) GMM (static): 1. Overview, 2. Examples, 3. Knowledge Combination, 4. Empirical Study; (Ch. 4) History-Dependent GMM; (Ch. 5) Learning Dependence Graph Structure; (Ch. 6) Application: Information Diffusion.
  17. Graphical Multiagent Models (GMMs) [Duong, Wellman & Singh '08]. Nodes: agents. Edges: dependencies among agent actions. Dependence neighborhood N_i. [Figure: example dependence graph over agents.]
  18. GMMs. Factor the joint probability distribution into neighborhood potentials (a Markov random field for graphical games [Daskalakis & Papadimitriou '06]): the joint probability distribution of the system's actions is the product of the potentials of each neighborhood's joint actions, Pr(a) ∝ Π_i π_i(a_Ni).
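A minimal sketch of this factorization, assuming binary actions and user-supplied toy potentials (all names here are illustrative): enumerate every configuration, multiply its neighborhood potentials, and normalize.

```python
# Minimal sketch of Pr(a) ∝ prod_i pi_i(a_Ni) by explicit enumeration.
import itertools

def gmm_joint(n, neighborhoods, potentials, action_set=("R", "D")):
    """neighborhoods[i]: indices in N_i (including i);
    potentials[i]: maps the restriction of a to N_i -> positive weight."""
    weights = {}
    for a in itertools.product(action_set, repeat=n):
        w = 1.0
        for i in range(n):
            a_Ni = tuple(a[j] for j in neighborhoods[i])
            w *= potentials[i](a_Ni)
        weights[a] = w
    Z = sum(weights.values())                  # normalizing constant
    return {a: w / Z for a, w in weights.items()}
```

The explicit sum over all |action_set|^n configurations is exactly the exponential cost flagged under Challenge 2a; a practical implementation would use approximate inference rather than full enumeration.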
  19. Example GMMs: Markov random field for computing pure-strategy Nash equilibrium; Markov random field for computing correlated equilibrium; information diffusion GMMs [Ch. 6]; regret GMMs [Ch. 3].
  20. Examples: Regret potential. Assume a graphical game with regret ε_i(a_Ni); then π_i(a_Ni) = exp(-λ ε_i(a_Ni)). Illustration: assume the agent prefers Republican to Democrat (fixing others' choices). Near-zero λ: picks randomly. Larger λ: more likely to pick Republican.
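A small illustration of how the rationality parameter λ shapes this potential, using made-up regret values of 0 for voting Republican and 0.5 for voting Democrat (neighbors' choices fixed):

```python
# Regret potential pi_i(a_Ni) = exp(-lam * eps_i(a_Ni)) with illustrative regrets.
import math

def regret_potential(lam, eps):
    return math.exp(-lam * eps)

for lam in (0.01, 1.0, 10.0):
    w_R, w_D = regret_potential(lam, 0.0), regret_potential(lam, 0.5)
    print(f"lam={lam}: Pr(R) ~ {w_R / (w_R + w_D):.2f}")   # 0.50, 0.62, 0.99
```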
  21. Flexibility: Knowledge Combination. Assume known graph structures, and suppose GMM 1 and GMM 2 represent two different knowledge sources (e.g., a heuristic rule-based GMM hG and a regret GMM reG). Knowledge combination builds a final GMM finalG via direct update, opinion pool, or mixing data.
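As a hedged sketch, here are two standard pooling rules that could instantiate the "opinion pool" step for two GMM joint distributions defined over the same configuration space; the thesis's exact combination operators may differ from these generic forms.

```python
# Generic opinion pools over two distributions p1, p2 (dicts: joint action -> probability).
def linear_opinion_pool(p1, p2, w=0.5):
    return {a: w * p1[a] + (1.0 - w) * p2[a] for a in p1}

def log_opinion_pool(p1, p2, w=0.5):
    unnorm = {a: (p1[a] ** w) * (p2[a] ** (1.0 - w)) for a in p1}
    Z = sum(unnorm.values())
    return {a: v / Z for a, v in unnorm.items()}
```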
  22. Empirical Study. A prediction-quality ratio > 1 means the combined model performs better than the input model (mixing-data GMM vs. regret GMM; mixing-data GMM vs. heuristic GMM). Combining knowledge sources in one GMM improves predictions. Combined models fail to improve on input models when the input does not capture any underlying behavior.
  23. Summary Of Contributions (Ch. 3) (I.A) GMMs accommodate expressions of different knowledge sources (I.B) This flexibility allows the combination of models for improved predictions
  24. Roadmap: (Ch. 2) Background; (Ch. 3) GMM (static); (Ch. 4) History-Dependent GMM: 1. Consensus Dynamics, 2. Description, 3. Joint vs. individual behavior, 4. Empirical study; (Ch. 5) Learning Dependence Graph Structure; (Ch. 6) Application: Information Diffusion.
  25. Example: Consensus Dynamics [Kearns et al. '09], an abstracted version of the AA mayor election example. Examine the ability to make collective decisions with limited communication and observation. [Figure: observation graph from agent 1's perspective.]
  26. Network structure here plays a large role in determining the outcomes. [Figure: agents' choices over time.]
  27. Modeling Multiagent Behavior in the Consensus Dynamics Scenario. Input: time-series action data + observation graph. Goals: 1. predict detailed actions over time, or 2. predict aggregate measures.
  28. History-Dependent Graphical Multiagent Models (hGMMs) [Duong, Wellman, Singh & Vorobeychik '10]. We condition actions on abstracted history H^t. Note: dependence graphs can be different from observation graphs. [Figure: dependence graph unrolled over time periods t-1, t, t+1.]
  29. hGMMs (Undirected). Within-time edges: dependencies between agent actions in the same time period; they define the dependence neighborhood N_i for each agent i. A GMM at every time t. [Figure: within-time edges at periods t-1, t, t+1.]
  30. hGMMs (Directed). Across-time edges: dependencies of agent i's action on some abstraction of prior actions by agents in i's conditioning set Γ_i. Example: a frequency function. [Figure: across-time edges between time periods.]
  31. hGMMs. The joint probability distribution of the system's actions at time t factors into potentials of each neighborhood's joint actions at t, conditioned on the history of the conditioning set: Pr(a^t | H^t) ∝ Π_i π_i(a^t_Ni | H^t_Γi).
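A minimal sketch of this conditional factorization, mirroring the static GMM sketch above; the potential signatures and names are assumptions, with each agent's potential scoring its neighborhood's joint action at t given the history restricted to its conditioning set.

```python
# Minimal sketch of Pr(a^t | H^t) ∝ prod_i pi_i(a^t_Ni | H^t_Γi).
import itertools

def hgmm_step(n, neighborhoods, cond_sets, potentials, history, action_set=("R", "D")):
    """history: list of past joint actions (most recent last);
    potentials[i]: maps (a_Ni, history restricted to Γ_i) -> positive weight."""
    weights = {}
    for a in itertools.product(action_set, repeat=n):
        w = 1.0
        for i in range(n):
            a_Ni = tuple(a[j] for j in neighborhoods[i])
            H_Gi = [tuple(past[j] for j in cond_sets[i]) for past in history]
            w *= potentials[i](a_Ni, H_Gi)
        weights[a] = w
    Z = sum(weights.values())
    return {a: w / Z for a, w in weights.items()}
```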
  32. Challenge: Dependence. Conditional independence given complete history vs. dependence induced by history abstraction/summarization (*). [Figure: two agents over periods t-2, t-1, t, with and without abstraction.]
  33. Individual vs. Joint Behavior Models. Given complete history, autonomous agents' behaviors are conditionally independent. Individual behavior models: π_i(a^t_i | H^t_Γi,complete). Joint behavior models allow specifying any action dependence within one's within-time neighborhood, given some (abstracted) history: π_i(a^t_Ni | H^t_Γi,abstracted).
  34. Empirical Study: Summary. Evaluation: compares joint behavior and individual behavior models by the likelihood of test data (time-series votes). The observation graph defines both the dependence neighborhoods N and the conditioning sets Γ. Joint behavior models outperform individual behavior models for shorter history lengths, which induce more action dependence. Approximation does not degrade performance.
  35. Summary Of Contributions (Ch. 4) (II.A) hGMMs support inference about system dynamics (II.B) hGMMs allow the specification of action dependence emerging from history abstraction
  36. Roadmap: (Ch. 2) Background; (Ch. 3) GMM (static); (Ch. 4) History-Dependent GMM; (Ch. 5) Learning Dependence Graph Structure: 1. Learning Graphical Game Models, 2. Learning hGMMs; (Ch. 6) Application: Information Diffusion.
  37. Learning History-Dependent Graphical Multiagent Models. Objective: given action data + observation graph, build a model that predicts detailed actions in the next period and aggregate measures of actions in the more distant future. Challenge: learn the (within-time) dependence graph; the dependence graph ≠ the observation graph; complexity of the dependence graph.
  38. Consensus Dynamics Joint Behavior Model. Extended Joint Behavior hGMM (eJCM): π_i(a^t_Ni | H^t_Γi) = r_i(a^t_Ni) · f(a_i, H^t_Γi)^γ · I(a_i, H^t_i)^β, where (1) r_i(a^t_Ni) is the reward for action a_i, discounted by the number of dissenting neighbors in N_i; (2) f(a_i, H^t_Γi) is the frequency with which a_i was chosen previously by agents in the conditioning set Γ_i; and (3) I(a_i, H^t_i) is an inertia term proportional to how long i has maintained its most recent action.
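A hedged sketch of evaluating one eJCM-style potential: the slide names the three factors but not their exact functional forms, so the reciprocal dissent discount, smoothed frequency, and streak-based inertia used below are illustrative assumptions only.

```python
# Illustrative eJCM-style potential; functional forms are assumed, not the thesis's.
def ejcm_potential(a_i, a_Ni, hist_cond, hist_own, reward, gamma, beta):
    # (1) reward for a_i, discounted by the number of dissenting neighbors in N_i
    dissent = sum(1 for a_j in a_Ni if a_j != a_i)
    r = reward[a_i] / (1 + dissent)
    # (2) frequency of a_i among the conditioning set's recent actions (smoothed)
    flat = [a for period in hist_cond for a in period]
    f = (flat.count(a_i) + 1) / (len(flat) + 2)
    # (3) inertia: how long i has maintained its most recent action
    streak = 0
    for past in reversed(hist_own):
        if past == a_i:
            streak += 1
        else:
            break
    inertia = 1 + streak
    return r * (f ** gamma) * (inertia ** beta)
```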
  39. Consensus Dynamics Individual Behavior Models. 1. Extended Individual Behavior hGMM (eICM): similar to eJCM, but assumes that N_i contains i only: π_i(a^t_i | H^t_Γi) = Pr(a^t_i | H^t_Γi) ∝ r_i(a_i) · f(a_i, H^t_Γi)^γ · I(a_i, H^t_i)^β. 2. Proportional Response Model (PRM): only incorporates the most recent time period [Kearns et al. '09]: Pr(a^t_i | H^t_Γi) ∝ r_i(a_i) · f(a_i, H^t_Γi). 3. Sticky Proportional Response Model (sPRM).
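For contrast with the joint sketch above, a one-period individual-behavior sketch in the spirit of the PRM line; the reward values and the fallback rule are illustrative assumptions.

```python
# PRM-style sketch: Pr(a_i | H^t_Γi) ∝ r_i(a_i) * f(a_i, last period's actions in Γ_i).
def prm_distribution(actions, reward, last_period_actions):
    counts = {a: last_period_actions.count(a) for a in actions}
    total = sum(counts.values()) or 1
    weights = {a: reward[a] * counts[a] / total for a in actions}
    Z = sum(weights.values())
    if Z == 0:        # no one in Γ_i played any of i's actions: fall back to reward alone
        weights, Z = dict(reward), float(sum(reward.values()))
    return {a: w / Z for a, w in weights.items()}
```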
  40. Learning hGMMs. Input: action observations (time series) + observation graph. Search space: 1. model parameters γ, β; 2. within-time edges. Output: hGMM. Objective: likelihood of data. Constraint: maximum node degree.
  41. Greedy Learning. Initialize the graph with no edges. Repeat: add the edge that generates the biggest increase (> 0) in the training data's likelihood. Until no edge can be added without violating the maximum node degree constraint.
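A minimal sketch of the greedy loop just described; `log_likelihood(edges)` stands in for refitting the hGMM parameters with the given within-time edge set and scoring the training data, and all names are placeholders.

```python
# Greedy within-time edge selection under a maximum node degree constraint.
def greedy_learn_edges(nodes, candidate_edges, log_likelihood, max_degree):
    edges = set()
    degree = {v: 0 for v in nodes}
    best_score = log_likelihood(edges)
    while True:
        best_gain, best_edge = 0.0, None
        for (u, v) in candidate_edges - edges:
            if degree[u] >= max_degree or degree[v] >= max_degree:
                continue
            gain = log_likelihood(edges | {(u, v)}) - best_score
            if gain > best_gain:                 # keep only strictly positive gains
                best_gain, best_edge = gain, (u, v)
        if best_edge is None:                    # no admissible edge improves likelihood
            return edges
        edges.add(best_edge)
        degree[best_edge[0]] += 1
        degree[best_edge[1]] += 1
        best_score += best_gain
```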
  42. Empirical Study: Learning from Human-Subject Data. Use asynchronous human-subject data. Vary the following environment parameters: discretization interval delta (0.5 and 1.5 seconds); history length h; graph structures/payoff functions: coER_2, coPA_2, and power22 (strongly connected minority). Goal: evaluate eJCM, eICM, PRM, and sPRM using 2 metrics: negative likelihood of agents' actions; convergence rates/outcomes.
  43. Predicting Dynamic Behavior. eJCMs and eICMs outperform the existing PRMs/sPRMs. eJCMs predict actions in the next time period noticeably more accurately than PRMs and sPRMs, and (statistically significantly) more accurately than eICMs.
  44. Predicting Consensus Outcomes. eJCMs have prediction performance comparable with other models in 2 settings: coER_2 and coPA_2. In power22, eJCMs predict consensus probability and colors much more accurately.
  45. Graph Analysis. In learned graphs, intra-color edges far outnumber inter-color edges. In power22, a large majority of edges are intra-red, identifying the presence of a strongly connected red minority.
  46. Summary Of Contributions (Ch. 5.2). (II.B) [revisit] This study highlights the importance of joint behavior modeling. (III.C) It is feasible to learn both dependence graph structure and model parameters. (III.D) Learned dependence graphs can be substantially different from observation graphs.
  47. Modeling Multiagent Systems: Step By Step. Dependence graph structure / observation graph structure: given as input, or learned from data. Potential function: approximation, intuition, background information.
  48. Roadmap: (Ch. 2) Background; (Ch. 3) GMM (static); (Ch. 4) History-Dependent GMM; (Ch. 5) Learning Dependence Graph Structure; (Ch. 6) Application: Information Diffusion: 1. Definition, 2. Joint behavior modeling, 3. Learning missing edges, 4. Experiments.
  49. Networks with Unobserved Links. True network G*: links facilitate how information diffuses from one node to another; real-world nodes have links unobserved by third parties. Observed network G.
  50. Problem [Duong, Wellman & Singh '11]. Given: a network G (with missing links) and snapshots of the network's states over time (diffusion traces on G*). Objective: model information diffusion on this network.
  51. Approach 1: Structure Learning. Recover missing edges: learn a network G', then learn the parameters of an individual behavior model built on G'. Learning algorithms: NetInf [Gomez-Rodriguez et al. '10] and MaxInf.
  52. Approach 2: Potential Learning. Construct an hGMM on G without recovering missing links. hGMMs allow capturing state correlations between neighbors who appear disconnected in the input network. Theoretical evidence [6.3.2]. Empirical illustrations: hGMMs outperform individual behavior models on a learned graph, on a random graph with sufficient training data, and on a preferential attachment graph (varying amounts of data).
  53. Summary of Contributions (Ch. 6). (II.C) Joint behavior hGMMs can capture state dependence caused by missing edges.
  54. Conclusions. 1. The machinery of probabilistic graphical models helps to improve modeling in multiagent systems by: allowing the representation and combination of different knowledge sources of agent reasoning; relaxing assumptions about action dependence (which may be a result of history abstraction or missing edges). 2. One can learn from action data both: (i) model parameters, and (ii) dependence graph structure, which can be different from interaction/observation graph structure.
  55. Conclusions (cont.). 3. The GMM framework contributes to the integration of: strategic behavior modeling techniques from AI and economics; probabilistic models from statistics that can efficiently extract behavior patterns from massive amounts of data, for the goal of understanding fast-changing and complex multiagent systems.
  56. Summary Graphical multiagent models: flexibility to represent different knowledge sources and combine them [UAI ’08] History-dependent GMM: capture dependence in dynamic settings [AAMAS ’10, AAMAS ’12] Learning graphical game models [AAAI ’09] Learning hGMM dependence graph, distinguishing observation/interactions graphs and probabilistic dependence graphs [AAMAS ‘12] Modeling information diffusion in networks with unobserved links [SocialCom ‘11]
  57. Acknowledgments. Advisor: Professor Michael P. Wellman. Committee members: Prof. Satinder Singh Baveja, Prof. Edmund H. Durfee, and Asst. Prof. Long Nguyen. Research collaborators: Yevgeniy Vorobeychik (Sandia Labs), Michael Kearns (U Penn), Gregory Frazier (Apogee Research), David Pennock and others (Yahoo/Microsoft Research). Undergraduate advisor: David Parkes. Family, friends, CSE staff.
  58. THANK YOU!