
Agent Activation Regimes



Presentation Transcript


  1. Agent Activation Regimes Rob Axtell Brookings Institution Santa Fe Institute George Mason University

  2. Agents as Simulation Systems • Discrete Event Simulation • Population of objects • Objects ‘wired’ together in some fashion • Objects act/interact upon ‘events’ • External events • Internal events • Objects not (usually) autonomous • Discrete Time Simulation • Often discretization of continuous time system Agent models have features of both

  3. Who Interacts with Whom? • Need to specify how agents are activated • Serial or parallel? • Serial: uniform, random, Poisson clock • Parallel: synchronous or asynchronous? • Need to specify the graph of interaction • If data available, use it! In what follows, a population of A agents

  4. Review of terms… • Serial: Agents act one at a time • Parallel: • Synchronous: All agents act in lock-step, using the previous period’s state information (e.g., CAs) • Partially asynchronous: Agents act in parallel and communicate when possible (delays bounded) • Fully asynchronous: Agents act in parallel without any guarantees on delays


  6. Nowak and May vs. Huberman and Glance, I • Context: Early ‘90s, microcomputer color graphics just becoming possible, spatial games a new idea • Nowak and May in Nature: Ostensibly showed that a spatially-extended PD game could support large-scale cooperation • Theory? Just a screen snapshot!

  7. Nowak and May [Figure: red and yellow are defectors, green and blue are cooperators]

  8. Nowak and May vs. Huberman and Glance, II • Huberman and Glance (PNAS): This result is an artifact of synchronous updating in the model

  9. Nowak and May vs. Huberman and Glance, III • Nowak and May responded that synchronization is common in biological systems • Huberman and Glance answered that this rationale does not apply to human social systems

  10. Uniform Activation: Idea • A period is defined by all A agents being activated exactly once • Feature: No agent is inactive • Problem: Calling agents in the same order could create artifacts • Fix: Randomize the order of agent activation • Cost: Expensive to randomize? • Unknown: How much randomization is enough? • Examples: Sugarscape, many early agent models

  11. Uniform Activation: Implementation, I [Diagram: an array of agents 1…A laid out in system memory, each object holding its state variables (state 1, state 2, …) and behavior methods (behavior1(), behavior2(), …)]

  12. Uniform Activation: Implementation, I [Diagram: the same agent array] Activate the population starting at agent 1, sequentially. Problem: Agent 1 always gets to move first.

  13. Uniform Activation: Implementation, I [Diagram: the same agent array] Pick a random starting point. Problem: Agent i always moves before agent i+1.

  14. Uniform Activation: Implementation, I [Diagram: the same agent array] Pick a random starting point AND a random direction. Problem: There is still correlation between i and i+1.

  15. Uniform Activation: Implementation, I Solution: Array/list randomization (a full shuffle) together with a random starting point and random direction. [Diagram: the same agent array]
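The shuffle-based fix can be sketched in a few lines of Python (a minimal illustration, not from the deck; the `Counter` agent is a stand-in for a real agent class):

```python
import random

class Counter:
    """Stand-in agent: just records how many times it was activated."""
    def __init__(self):
        self.activations = 0

    def step(self):
        self.activations += 1

def run_period_uniform(agents, rng=random):
    """One period of uniform activation: every agent acts exactly once,
    in a freshly randomized order. A full Fisher-Yates shuffle subsumes
    the random-starting-point and random-direction fixes."""
    order = list(agents)   # shuffle a copy of the references, not the agents
    rng.shuffle(order)
    for agent in order:
        agent.step()

agents = [Counter() for _ in range(100)]
for _ in range(10):        # T = 10 periods
    run_period_uniform(agents)
assert all(a.activations == 10 for a in agents)  # exactly once per period
```

Note that `random.shuffle` is an in-place Fisher-Yates pass, so the cost is linear in A per period.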

  16. Uniform Activation: Implementation, II—Efficiency [Diagram: an array of pointers (Pointer to Agent 1 … Pointer to Agent A) referencing agent objects scattered in system memory; shuffle the small pointers rather than the large agent objects]

  17. Uniform Activation: Implementation, II—Efficiency [Diagram: the same pointer array after a shuffle—the pointers to Agents 2 and 4 swapped—while the agent objects themselves stay in place]

  18. Uniform Activation: Implementation, III—How Much ‘Shuffling’ to Do? Case 1: Neighbors swapped [Diagram: pointer array of agents 1–6, two adjacent pointers swapped] Result: 4 agents have 1 new neighbor each

  19. Uniform Activation: Implementation, III—How Much ‘Shuffling’ to Do? Case 2: Agents with a common neighbor swapped [Diagram: pointer array of agents 1–6, two pointers one position apart swapped] Result: 2 agents have 1 new neighbor each; the 2 (moving) agents have 2 new neighbors each

  20. Uniform Activation: Implementation, III—How Much ‘Shuffling’ to Do? Case 3: Agents distant from one another swapped [Diagram: pointer array of agents 1–6, two far-apart pointers swapped] Result: 4 agents have 1 new neighbor each; the 2 (moving) agents have 2 new neighbors each

  21. Uniform Activation: Implementation, III—Shuffling To give 1/2 of the agents 1 new neighbor, shuffle ~25% of the agents. To give 1/2 of the agents 2 new neighbors, shuffle ~50% of the agents.

  22. Random Activation: Idea • Agents are selected to be active with uniform probability • A period is defined by A agents being activated • Feature: Very efficient to implement • Cost: Not all agents are active each period • Unknown: When does it differ from uniform activation? • Examples: Zero-intelligence traders, bilateral exchange, Axelrod culture model, many others
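Random activation is a one-line loop over uniform draws with replacement (a sketch, reusing a toy `Counter` agent; names are illustrative):

```python
import random

class Counter:
    """Stand-in agent: just records how many times it was activated."""
    def __init__(self):
        self.activations = 0

    def step(self):
        self.activations += 1

def run_period_random(agents, rng=random):
    """One period of random activation: A independent uniform draws
    WITH replacement, so within a period some agents act several
    times while others do not act at all."""
    for _ in range(len(agents)):
        rng.choice(agents).step()

rng = random.Random(42)
agents = [Counter() for _ in range(100)]   # A = 100
for _ in range(50):                        # T = 50 periods
    run_period_random(agents, rng)

counts = [a.activations for a in agents]
assert sum(counts) == 100 * 50   # K = T*A activations in total
assert min(counts) < max(counts) # unequal activation across agents
```

Each period is A calls to `rng.choice`, with no shuffling or bookkeeping, which is why this regime is so cheap.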

  23. Random Activation: Analysis, I At each ‘click’ the probability a specific agent is activated is 1/A. Over K activations—K/A periods—the probability an agent is activated exactly i times is binomial: Pr[i] = C(K, i) (1/A)^i (1 − 1/A)^(K−i) The mean number of activations is K/A. The variance in the number of activations across the agent population is K(A−1)/A² ≈ K/A, the mean. The skewness is (A−2)/√(K(A−1)).

  24. Random Activation: Analysis, II Call w the number of ‘clicks’ an agent has to wait to be activated—the waiting time. Its distribution is geometric: Pr[w] = (1/A)(1 − 1/A)^w The expected value is A−1 and the variance is A(A−1). In terms of time: say that T periods have gone by, thus K = TA. The mean number of activations is T, as we expect. The variance is now T(A−1)/A ≈ T. The ratio of variance to mean is (A−1)/A ≈ 1. The skewness becomes (A−2)/√(TA(A−1)) ≈ 1/√T.

  25. Random Activation: Analysis, III Generalization: k agents are active at each ‘click’; 1 period still consists of A clicks. Pr[a specific agent is active at any click] = k/A. Pr[agent activated exactly i times over T periods] = C(TA, i) (k/A)^i (1 − k/A)^(TA−i) The mean number of activations is kT. The variance is kT(A−k)/A. The ratio of variance to mean is (A−k)/A. The skewness is (A−2k)/√(kTA(A−k)).
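The binomial moments above can be checked numerically from the pmf itself (a sketch; the function name is mine):

```python
from math import comb

def activation_pmf(i, A, T, k=1):
    """Pr[a given agent is activated exactly i times over T periods]:
    binomial with n = T*A clicks and per-click success probability k/A."""
    n, p = T * A, k / A
    return comb(n, i) * p**i * (1 - p)**(n - i)

A, T, k = 100, 10, 2
n = T * A
mean = sum(i * activation_pmf(i, A, T, k) for i in range(n + 1))
var = sum((i - mean) ** 2 * activation_pmf(i, A, T, k) for i in range(n + 1))

assert abs(mean - k * T) < 1e-6              # mean = kT = 20
assert abs(var - k * T * (A - k) / A) < 1e-4  # variance = kT(A-k)/A = 19.6
```

Setting k = 1 recovers the single-agent-per-click statistics of the earlier slides.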

  26. Random Activation: Example Bilateral exchange model: Distribution of agent activations with random pairings; k = 2, A = 100, T = 1000

  27. Comparison of Uniform and Random Activation Regimes • Example 1: Axelrod culture model • Replication with the Sugarscape model • Qualitative replication succeeded; quantitative replication failed • After converting Sugarscape agent activation from uniform to random, quantitative replication worked! • Example 2: Firm formation model

  28. Poisson Clock Activation: Idea • Story: Each agent has an internal clock that wakes it up periodically • A period is defined as the amount of ‘wall time’ such that A agents are active on average • Feature: ‘True’ agent autonomy • Disadvantage: Agents must be sorted each period • Examples: Sugarscape reimplementation, game theory models

  29. Poisson Clock Activation: Implementation • Specify at the outset that the model will be run for T periods • At time 0, for each agent draw T random numbers via t_{i+1} = t_i − log(U[0,1]) • Then sort these A·T numbers to develop a schedule of agent activation: • A naïve sort scales like N² • Quicksort goes like N log(N)
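One way to build such a schedule is sketched below. Rather than presorting all A·T draws, it keeps only each agent's next wake-up time in a heap, which yields activations in time order as a stream (an alternative to the global sort on the slide, not the deck's own code):

```python
import heapq
import random

def poisson_schedule(A, T, rng=random):
    """Each agent carries a rate-1 Poisson clock: successive wake-ups at
    t_{i+1} = t_i - log(U[0,1]).  A heap holding one pending wake-up per
    agent pops activations in nondecreasing time order."""
    heap = [(rng.expovariate(1.0), i) for i in range(A)]  # first wake-ups
    heapq.heapify(heap)
    schedule = []
    while heap:
        t, agent = heapq.heappop(heap)
        if t >= T:
            continue  # this agent's clock has run past the horizon
        schedule.append((t, agent))
        # schedule this agent's next wake-up, one exponential gap later
        heapq.heappush(heap, (t + rng.expovariate(1.0), agent))
    return schedule

sched = poisson_schedule(A=50, T=100, rng=random.Random(7))
times = [t for t, _ in sched]
assert times == sorted(times)   # activations emerge in time order
```

`random.expovariate(1.0)` draws −log(U[0,1]) directly; over T periods each agent is activated T times on average, so `len(sched)` is close to A·T.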

  30. Poisson Clock Activation: Analysis Over T periods, the mean number of activations per agent = T. The variance is also T. The skewness is 1/√T and the excess kurtosis is 1/T (each equal to 1 over a single period). The total number of activations, n, occurring in a period is a random variable having pmf Pr[n] = A^n e^(−A)/n! which assumes a nearly Gaussian shape for large A.

  31. Comparison of Uniform and Poisson Activation Regimes • Sexual reproduction runs of Sugarscape can yield large-amplitude fluctuations • Computer scientists reimplementing Sugarscape attempted to replicate this finding • Results were negative, i.e., the finding is not robust to the activation regime

  32. Agent Activation Intercomparison

  33. Preferential Activation [Page 97] • What if agent activation were not so egalitarian? • What if agents could use their resources to ‘buy’ activations? • Could successful agents gain further advantage (positive feedback)? • Perhaps a firm can be thought of in this way • No definitive results, but clearly this matters

  34. Lessons • The activation regime may matter, especially for the quantitative character of your results • In a Dr. Pangloss world, the robustness of every result would be tested under each activation regime • Easy in Ascape, MASON, RePast • Not easy in NetLogo

  35. Does Agent Activation Regime Always Matter? • Gacs [1997]: Gives technical requirements under which updating does not matter • Istrate [forthcoming, Games and Economic Behavior]: When models are formally ergodic, the asymptotic states are shown to be independent of the agent activation scheme

  36. How to Activate Agents with Many Rules? • So far, agents have had only 1 behavior • Now, agents have multiple behaviors, e.g., movement, gathering, trading, procreation • The previous problem recurs intra-agent: • Uniform activation • Random activation • Poisson clock activation Agent i: rule A rule B rule C

  37. Agents with Multiple Rules • In Ascape and other agent-modeling platforms, there are software switches to either • Execute all agent rules when an agent is activated, or • Execute all agents on a particular rule and repeat for the other rules Agent i: rule A rule B rule C Agent j: rule A rule B rule C Agent k: rule A rule B rule C
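The two switch settings differ only in loop nesting, which a short sketch makes concrete (the trace lists are mine, for illustration):

```python
def run_agent_wise(agents, rules, trace):
    """Agent-wise switch: each agent executes all of its rules
    before the next agent is activated."""
    for agent in agents:
        for rule in rules:
            trace.append((agent, rule))

def run_rule_wise(agents, rules, trace):
    """Rule-wise switch: all agents execute rule A, then all
    execute rule B, and so on."""
    for rule in rules:
        for agent in agents:
            trace.append((agent, rule))

agents, rules = ["i", "j"], ["A", "B"]
t1, t2 = [], []
run_agent_wise(agents, rules, t1)  # i:A, i:B, j:A, j:B
run_rule_wise(agents, rules, t2)   # i:A, j:A, i:B, j:B
assert t1 != t2 and sorted(t1) == sorted(t2)  # same work, different order
```

Both settings perform the same agent-rule pairs; only the interleaving changes, and that interleaving is exactly the intra-agent activation question above.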

  47. Reality: Each Agent on Its Own ‘Thread’ • Serial execution is getting ‘messy’, so why not just move to asynchronous parallel execution? • Now we need rules for collisions: • Avoidance: a collision is imminent, so flip a coin, say • Adjudication: a collision has happened, resolve it • Social institutions ‘solve’ these problems in reality • Debugging can be difficult • Multi-threading: debugging very difficult
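A minimal sketch of the adjudication case, with each agent on its own thread contending for a shared site (illustrative names, not from the deck; a lock serializes the contested update):

```python
import threading

class Cell:
    """A site that two asynchronously running agents may try to occupy."""
    def __init__(self):
        self.lock = threading.Lock()
        self.occupant = None

def try_enter(cell, agent_id, results):
    """Adjudication via a lock: the first thread to acquire it wins;
    the loser observes the collision and backs off."""
    with cell.lock:
        won = cell.occupant is None
        if won:
            cell.occupant = agent_id
    results.append(won)

cell, results = Cell(), []
threads = [threading.Thread(target=try_enter, args=(cell, i, results))
           for i in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()
assert sum(results) == 1  # exactly one agent enters; the other backs off
```

Even this two-agent case hints at why multi-threaded agent models are hard to debug: the winner depends on thread scheduling and differs from run to run.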
