Agents in Complex Adaptive Systems Models
Characteristics • Multiple agents • Agents attempt to maximize some measure of immediate value • But, in this case, there’s only so much to go around
Agents read their world to create schemes for interpretation and action • Schemes (behaviors) may change in a random or purposeful fashion, ideally improving the agent's performance
Complex Adaptive Systems Model • Traffic • Finance • Social organization • Ecologies
Complexity • When collective phenomena cannot be predicted by analyzing individual components • Example: recent computer-driven market sell-offs
Objects • Agents: similar but heterogeneous • Strategies: used to generate predictions based on history • Game (environment)
General Process • Agents randomly pick strategies at the beginning of a game or trial • Each agent has a finite memory of recent outcomes • At each new trial, each agent plays its recently best-performing strategy • The environment aggregates the predictions and determines the winning outcome
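The process above can be sketched as a minimal simulation loop. This is an illustrative sketch, not the deck's own code: the agent count, memory length, strategy count, and the minority-wins rule are all assumptions chosen for a small runnable example.

```python
import random

random.seed(0)  # reproducible sketch

N_AGENTS, MEMORY, N_STRATEGIES = 11, 2, 2   # assumed parameters
N_HISTORIES = 2 ** MEMORY

# Each strategy is a lookup table mapping a history index to an action bit.
agents = [[[random.randint(0, 1) for _ in range(N_HISTORIES)]
           for _ in range(N_STRATEGIES)] for _ in range(N_AGENTS)]
scores = [[0] * N_STRATEGIES for _ in range(N_AGENTS)]
history = 0  # the last MEMORY winning outcomes, packed into an integer

for trial in range(50):
    # Each agent plays its best-performing strategy so far (ties -> first).
    picks = [max(range(N_STRATEGIES), key=lambda s: scores[a][s])
             for a in range(N_AGENTS)]
    actions = [agents[a][picks[a]][history] for a in range(N_AGENTS)]
    n1 = sum(actions)
    winning_side = 1 if n1 < N_AGENTS / 2 else 0  # assumed: minority side wins
    # Reward every strategy that would have predicted the winning side.
    for a in range(N_AGENTS):
        for s in range(N_STRATEGIES):
            if agents[a][s][history] == winning_side:
                scores[a][s] += 1
    history = ((history << 1) | winning_side) % N_HISTORIES
```

Note that agents never change their strategy tables; only the scores that select among them evolve, which matches the deck's later point that this is selection, not learning.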
Actions • 0 or 1 • Mapped to paired actions such as buy/sell
Memory or History • If m, the memory of past choices, is 2, then the possible histories are 00, 01, 10, 11
Strategies • If memory is 2, an agent needs 4 bits in each strategy string, one per possible history • A strategy is a choice based on history • 0000 means always pick 0 • 1111 means always pick 1 • 1010 means predict 1 if the history is 11, 0 if it is 10, 1 if it is 01, and 0 if it is 00
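The strategy encoding can be written as a one-line lookup. The indexing convention below (reading the string left to right from the highest history value) is an assumption made to match the slide's example, where "1010" predicts 1 for history 11 and 0 for history 10.

```python
def predict(strategy, history_bits):
    """Return a strategy string's prediction (0 or 1) for a given history.

    The string is read left to right from the highest history value down,
    so "1010" predicts 1 for "11", 0 for "10", 1 for "01", 0 for "00".
    """
    return int(strategy[len(strategy) - 1 - int(history_bits, 2)])

predict("0000", "10")  # always pick 0
predict("1111", "01")  # always pick 1
predict("1010", "11")  # predicts 1, per the slide's example
```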
A strategy’s success is measured by counting its correct choices within a window of size T • Agents may choose not to act if no strategy recommends itself
Confidence • Each agent has a threshold representing its confidence • Active strategies are those whose success counts are above this threshold • Agents without active strategies do not act • The highest-scoring active strategy is used, with ties broken at random
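The windowed scoring and threshold rules above can be sketched as a single selection function. The function name and the use of fixed-length deques for the window of size T are assumptions for illustration; the rules themselves (count correct choices in the window, require the count to exceed the threshold, break ties at random, sit out otherwise) follow the slides.

```python
import random
from collections import deque

def choose_strategy(window_results, threshold):
    """Return the index of the strategy to play, or None to sit out.

    window_results[s] is a deque with maxlen T holding 1/0 flags for
    whether strategy s was correct on each of the last T trials.
    """
    scores = [sum(r) for r in window_results]
    active = [s for s, sc in enumerate(scores) if sc > threshold]
    if not active:
        return None  # no strategy is confident enough: the agent does not act
    best = max(scores[s] for s in active)
    return random.choice([s for s in active if scores[s] == best])

# Strategy 0 was right 3 times in the last 3 trials; strategy 1 only once.
results = [deque([1, 1, 1], maxlen=3), deque([1, 0, 0], maxlen=3)]
choose_strategy(results, 2)   # strategy 0 is the only active one
choose_strategy(results, 4)   # no strategy clears the threshold: None
```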
Winners and Losers • Agents are presented the history • Agents make choices • The actual outcome is a result of their choices • outcome = H[ L(t) – n1(t) ] • where H is the Heaviside function: H(x) = 1 if x > 0, else 0 • n1(t) is the number of agents choosing 1, and L(t) is the resource level
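The outcome formula is direct to compute. The bar-seating interpretation in the usage comments is an illustrative assumption in the spirit of minority-game examples, not something the slide states.

```python
def outcome(resource_level, n_ones):
    """outcome = H[L(t) - n1(t)], with H(x) = 1 if x > 0, else 0.

    Side 1 "wins" (outcome 1) only while demand for it stays strictly
    below the resource level L(t).
    """
    return 1 if resource_level - n_ones > 0 else 0

# e.g. with L = 5 seats available and 3 agents choosing to go (action 1):
outcome(5, 3)  # 1: the side that chose 1 fit within the resource
outcome(5, 7)  # 0: side 1 oversubscribed the resource and loses
```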
Interest at the Boundary • Feedback means success will lead to failure • If everybody does the same thing, then that action becomes less attractive • The total behavior of the system can be analyzed in terms of the strategies actually in use versus the full search space of strategies
Relevant Characteristics • Senses history • Action selection based entirely on history • The environment is really just all the other agents • The optimal selection changes over time • The behavior of an individual agent will change over time, but remains somewhat habitual • Learning? Not really, since the strategies themselves do not change
Works Consulted • Dr. Dobb’s Journal, #341, October 2002, pp. 16–22 • M. Hart, P. Jefferies and N.F. Johnson (Physics Department, Oxford University, Oxford OX1 3PU, U.K.) and P.M. Hui (Department of Physics, The Chinese University of Hong Kong): “Crowd-anticrowd theory of the Minority Game”
Other References • Gell-Mann, M. (1994): The Quark and the Jaguar. (New York: Freeman & Co.). • Holland, J.H. (1995): Hidden Order. (Reading, MA: Addison-Wesley). • Jantsch, E. (1980): The Self-Organizing Universe. (Oxford: Pergamon Press). • Lewin, R. (1992): Complexity: Life at the Edge of Chaos. (New York: Macmillan). • Maturana, H. and F. Varela (1992): The Tree of Knowledge. (Boston: Shambhala).
Prigogine, I., & I. Stengers (1984): Order Out of Chaos. (New York: Bantam Books). • Waldrop, M.M. (1992): Complexity: The Emerging Science at the Edge of Chaos. (New York: Simon and Schuster).