
ABM: Issues



  1. ABM: Issues Dr Andy Evans

  2. Structuring a model Models generally comprise: Objects. Environment. I/O code. Data reporting code. Some kind of time sequencing. Some kind of ordering of processing within a time step. Some kind of decision making and/or rulesets. In agent models these are all relatively explicit.

  3. Structural issues with modelling: artefacts; timing; data; geography; model construction; frameworks; decision-making frameworks.

  4. Artefacts Say we had a time step in which cells become x when the neighbour up and to the left is x. If we calculate from the top-left corner and run across each row, the change races along the row within a single sweep: in one time step, the third cell shouldn’t be x, but it is, because its neighbour changed earlier in the same sweep. We’d get a different result if we ran from the lower right. [Grid diagrams omitted: the row of cells before and after one left-to-right sweep.]

  5. Artefacts With CAs this seems simple. You generally copy the entire grid, then put changes into another grid temporarily while doing the calculations. This is then copied back into the CA array when everything is complete. This prevents you acting on changed neighbours. However, with large agent models we don’t want to be copying the whole system each time. So, we might just wait until the end of a step and update everything. This is certainly one solution.
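The double-buffering fix can be sketched in a few lines of Python (a minimal one-dimensional illustration, not from the slides; the rule “a cell becomes x if its left-hand neighbour is x” stands in for the up-and-left rule):

```python
# Sketch: a 1-D row of cells where a cell becomes 'x' if its left-hand
# neighbour is 'x'. Updating in place as we sweep left-to-right propagates
# the change across the whole row in one step; double-buffering (copying
# the grid, as described above) moves it one cell per step.

def step_in_place(row):
    row = list(row)
    for i in range(1, len(row)):
        if row[i - 1] == 'x':   # reads a neighbour we may just have changed
            row[i] = 'x'
    return row

def step_buffered(row):
    new = list(row)             # temporary copy: all reads see the old state
    for i in range(1, len(row)):
        if row[i - 1] == 'x':
            new[i] = 'x'
    return new

start = ['x', '.', '.', '.']
print(step_in_place(start))  # ['x', 'x', 'x', 'x'] - the sweep artefact
print(step_buffered(start))  # ['x', 'x', '.', '.'] - only one cell changes
```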

  6. Synchronous scheduling In general, models have a global sequence of time-steps (sometimes called “ticks” in Logo-based systems) maintained by a clock. The agents are told to enact their behaviour at given time-steps by a scheduler. When there is one global time and all agents act at this pace, none getting ahead in the sequence, this is known as synchronisation: all agents broadly see it as the same time. Synchronisation doesn’t, however, require every agent to do something each time-step: that would unnecessarily increase processing requirements. Different agents may run on different timescales (every five ticks, for example).
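A minimal scheduler along these lines might look as follows (a Python sketch; the `ScheduledAgent` class and its `every` interval are invented for illustration):

```python
# Sketch: a synchronised clock where each agent registers the interval
# (in ticks) at which it should act, so not every agent runs every step.

class ScheduledAgent:
    def __init__(self, name, every):
        self.name, self.every = name, every
        self.acted_at = []          # record of ticks on which we acted

    def step(self, tick):
        self.acted_at.append(tick)

def run(agents, ticks):
    for tick in range(ticks):       # the global clock
        for agent in agents:
            if tick % agent.every == 0:   # this agent's own timescale
                agent.step(tick)

fast = ScheduledAgent("fast", 1)
slow = ScheduledAgent("slow", 5)
run([fast, slow], 10)
print(slow.acted_at)  # [0, 5] - acts every five ticks
```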

  7. Event-based scheduling Different agents may run based on event triggers. In general, these still work within a synchronised time sequence. Either way, as with our array problem, we need to make sure updates of contacted agents are either complete or stored for later, depending on our model. However, this is often a hard choice: does a housebuyer always arrive too late to put in an offer? Sometimes they must arrive in time. In general, therefore, we randomise the order of agent steps.
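Randomising the step order is straightforward (a Python sketch; the agents here are stand-in callables that just log when they act):

```python
import random

# Sketch: shuffling the order in which agents act each time-step, so no
# agent (e.g. a housebuyer) is systematically first or last to an offer.

def run(agents, ticks, rng):
    for tick in range(ticks):
        order = list(agents)
        rng.shuffle(order)          # a fresh random order every step
        for agent in order:
            agent(tick)

log = []
agents = [lambda t, n=n: log.append((t, n)) for n in range(3)]
run(agents, 2, random.Random(1))
print(log)  # each tick contains all three agents, in shuffled order
```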

  8. Parallel synchronisation As it is for agents, so it is for parallel models – only more so. Parallel developers think hard about synchronisation. Most models adopt conservative synchronisation of more or less strength, that is, there is some central scheduler that coordinates jobs around the global time. We need to avoid processes happening out of sync for most systems. This is particularly the case in geographical systems, which tend to be based on Markov processes / causality.

  9. Lookahead However, some processing can be done without communication. It is also the case that events/processes can be stacked and run through in sequence at whatever speed a processor can until it needs to respond. The difference between now and the first necessary communication out is known as the lookahead. A lot of effort goes into working out what this is and how much can be done in it.

  10. Optimistic scheduling Optimistic scheduling plays fast and loose with the amount that is done in this time, but then recovers if a critical causal event needs to be injected into a process. For example, in the Time Warp Algorithm, processing is rolled-back if necessary, including any messages sent. As you can imagine, this gets fairly complicated, and these algorithms only really work where there is history path-independence in the system.

  11. Monte Carlo sampling It is also the case that if we have input data repeatedly entering the model, this can lead to harmonic artefacts. Data running through the system can lead to internal dynamics that cycle at the same periodicity producing artefacts; this is particularly problematic with self-adjusting systems. We need some randomisation. In general we sample input data randomly, even if we act to include certain periodicities (e.g. seasonal). We sample weighted by distribution – so-called Monte Carlo sampling. The same thing happens with geographical variation, so we randomly sample agents occurring in space too (though more rarely based on a distribution).
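Weighted (Monte Carlo) sampling of input data can be sketched with Python’s standard library (the rainfall categories and their weights are invented for illustration):

```python
import random

# Sketch: Monte Carlo sampling of input data - draws weighted by an
# assumed observed frequency distribution, breaking any periodicity
# present in the raw input sequence.

def monte_carlo_sample(values, weights, n, rng):
    # random.Random.choices draws with replacement, weighted by 'weights'
    return rng.choices(values, weights=weights, k=n)

rng = random.Random(42)
rainfall = monte_carlo_sample(["dry", "light", "heavy"],
                              [0.6, 0.3, 0.1], 1000, rng)
print(rainfall.count("dry") / 1000)  # roughly 0.6
```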

  12. Stochastic models Deterministic models always end up with the same answer for the same inputs. Stochastic models include some randomisation, either of data or behaviour. As we have seen, these are usually quantified by repeated runs. However, sometimes we want to run under the same “random” conditions (e.g. to understand some behaviour). Standard computers without external inputs never generate truly random numbers – they are random-like sequences, adjusted by some random seed. If we record the random seed, we can re-run the “random” model exactly.
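A Python sketch of seed-recorded reproducibility (the `stochastic_run` function here is a stand-in for a whole model run):

```python
import random

# Sketch: recording the random seed lets us replay a "random" run exactly,
# because the generator produces the same random-like sequence for the
# same seed.

def stochastic_run(seed, steps=5):
    rng = random.Random(seed)            # pseudo-random sequence from seed
    return [rng.random() for _ in range(steps)]

seed = 20240101
assert stochastic_run(seed) == stochastic_run(seed)      # identical replay
assert stochastic_run(seed) != stochastic_run(seed + 1)  # new seed, new run
```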

  13. Space Boundary types: infinite; bound; torus; other topologies. Organisation: continuous; grid (square, triangular, hexagonal, other); irregular areas, or quad-tree; network. Neighbourhoods: Moore; Von Neumann; diagonal; Euclidean; network.
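A torus boundary and the two commonest grid neighbourhoods can be sketched as follows (Python, illustrative only):

```python
# Sketch: a torus boundary wraps coordinates at the grid edges, and the
# Moore and Von Neumann neighbourhoods differ in whether diagonals count.

def wrap(x, y, width, height):
    return x % width, y % height          # torus: opposite edges join up

def moore(x, y, width, height):
    return [wrap(x + dx, y + dy, width, height)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]        # 8 cells, diagonals included

def von_neumann(x, y, width, height):
    return [wrap(x + dx, y + dy, width, height)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]  # 4 cells

print(len(moore(0, 0, 10, 10)))     # 8
print(von_neumann(0, 0, 10, 10))    # [(1, 0), (9, 0), (0, 1), (0, 9)]
```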

  14. GIS and Agent Systems The problem is that GIS is inherently static: the GIS data model represents a single point in time. There has been some work on temporal GIS data models, but there are no widespread solutions. Time, however, is essential in an ABM, so we need to link GIS and ABM. Two approaches: loose vs tight/close coupling.

  15. Issues Ideally, then, we need: a clock/scheduler, if we’re not calling each agent every time-step (and/or some kind of event-watching system); some way of randomising sampling of agents/agent locations; some way of running Monte Carlo sampling of both inputs and parameters; a variety of projections/space types/boundaries.

  16. Helpful Easy I/O; saving model sequences as video; connectivity to R, Excel, GISs etc.; options for distributing and describing models; easy GUI production and visualisation of data. Given all this, the thought that someone might build an agent framework that does all this for you sounds increasingly good of them.

  17. ABM Frameworks What are they? Pieces of software to help people build ABMs; they often offer the functions outlined. Wide range of tools: pre-written functions; entire graphical environments; somewhere in the middle.

  18. Why use them? For non-programmers: Graphical “point-and-click” model development. Easier than having to learn a programming language. For programmers: No need to write ‘external’ functionality (e.g. drawing graphs, scheduling events, creating displays). Can concentrate on model logic. Save time (?)

  19. Commonly Used Platforms Netlogo: http://ccl.northwestern.edu/netlogo/ Repast: http://repast.sourceforge.net/ MASON: http://cs.gmu.edu/~eclab/projects/mason/ Ascape: http://ascape.sourceforge.net/ ABLE: http://www.research.ibm.com/able/ Modelling4All: http://www.modelling4all.org/ Agent Analyst: http://www.spatial.redlands.edu/agentanalyst/

  20. Recursive Porous Agent Simulation Toolkit (RePast) Argonne National Laboratory. Based on Swarm. Includes a Logo-based language, ReLogo, and graphical programming. Imports NetLogo. Largely programmed in Java. Includes 3D GIS using GeoTools. Two main versions: Simphony (Java etc., based on Eclipse); RePast for HPC (C++, MPI-based).

  21. Functionality Flexible scheduling including synchronised and event-based scheduling. Randomisation toolkit. Monte Carlo simulation framework. Different spaces and boundaries, including multiple spaces at once.

  22. Functionality Links with R, Weka, GRASS, Pajek. Libraries for genetic algorithms, neural networks, regression, random number generation, and specialized mathematics: http://repast.sourceforge.net/docs/RepastSimphonyFAQ.pdf Exports model shapefiles. Exports applications.

  23. Why not use a framework? Overheads: sometimes there are better ways of doing the same job if that’s all you have to do. For example, RePast’s watch timing is quite heavy but makes sense as a general framework. Constraints: sometimes it is hard to squeeze a model into a Framework’s way of doing things, let alone then move it to a different framework.

  24. ABM: Decision making

  25. Thinking in AI Agent-based systems and other AI can contain standard maths etc., but their real power comes from replicating how we act in the real world: assessing situations, reasoning about them, making decisions, and then applying rules. Reasoning: “if a café contains food, and food removes hunger, a café removes hunger”. Rules: “if my hunger is high, I should go to a café”.

  26. Reasoning Programming languages developed in the late 1960s / early 1970s offered the promise of logical reasoning (Planner; Prolog). These allow the manipulation of assertions about the world: “man is mortal” and “Socrates is a man” leads to “Socrates is mortal”. Assertions are usually “triples” of subject–predicate (relationship)–object. There are interfaces for connecting Prolog and Java: http://en.wikipedia.org/wiki/Prolog#Interfaces_to_other_languages
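The syllogism can be sketched as forward-chained inference over triples, in Python rather than Prolog (illustrative only; Prolog itself resolves goals by backward chaining):

```python
# Sketch: a tiny triple store of (subject, predicate, object) assertions,
# with transitive "is_a" facts chained until nothing new can be derived.

facts = {
    ("Socrates", "is_a", "man"),
    ("man", "is_a", "mortal"),
}

def infer(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(derived):
            if p != "is_a":
                continue
            for s2, p2, o2 in list(derived):
                if p2 == "is_a" and o == s2 \
                        and (s, "is_a", o2) not in derived:
                    derived.add((s, "is_a", o2))  # e.g. Socrates is mortal
                    changed = True
    return derived

print(("Socrates", "is_a", "mortal") in infer(facts))  # True
```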

  27. Rulesets Most rules are condition-state-action like: “if hunger is high go to café” Normally there’d be a hunger state variable, given some value, and a series of thresholds. A simple agent would look at the state variable and implement or not-implement the associated rule.
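A minimal condition-state-action agent might be sketched as follows (Python; the `HungryAgent` class and its 0.7 threshold are invented for illustration):

```python
# Sketch: a state variable (hunger), a threshold, and an action that
# fires when the condition holds - the simple agent described above.

class HungryAgent:
    def __init__(self):
        self.hunger = 0.0
        self.location = "home"

    def step(self):
        if self.hunger > 0.7:          # condition on the state variable
            self.location = "cafe"     # action: go to the café
            self.hunger = 0.0          # eating resets the state

a = HungryAgent()
a.hunger = 0.9
a.step()
print(a.location)  # cafe
```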

  28. How do we decide actions? It is fine to have condition-state-action rules like “if hunger is high, go to café” and “if tiredness is high, go to bed”. But how do we decide which rule should be enacted if both conditions hold? How do real people choose?

  29. Picking rules One simple decision-making process is to choose randomly. Another is to weight the rules and pick the rule with the highest weight. Roulette-wheel picking weights the rules, then picks probabilistically based on the weights using Monte Carlo sampling. How do we pick the weights? Calibration? Do we adjust them with experience, for example with a GA? We may try to model specific cognitive biases: http://en.wikipedia.org/wiki/List_of_cognitive_biases Anchoring and adjustment: pick an educated or constrained guess at likelihoods or behaviour and adjust from that based on evidence.
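Roulette-wheel picking can be sketched as follows (Python; the rules and weights are invented for illustration):

```python
import random

# Sketch: each rule gets a weight, and a rule is chosen with probability
# proportional to its weight - a spin of the roulette wheel.

def roulette(rules, weights, rng):
    spin = rng.random() * sum(weights)   # a point on the wheel
    cumulative = 0.0
    for rule, w in zip(rules, weights):
        cumulative += w                  # each rule owns a wedge of the wheel
        if spin <= cumulative:
            return rule
    return rules[-1]                     # guard against float round-off

rng = random.Random(7)
picks = [roulette(["eat", "sleep"], [3.0, 1.0], rng) for _ in range(1000)]
print(picks.count("eat") / 1000)  # roughly 0.75
```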

  30. Reality is fuzzy Alternatively we may wish to hedge our bets and run several rules. This is especially the case as rules tend to be binary (run / don’t run), yet the world isn’t always like this. Say we have two rules: “if hot, open window”; “if cold, close window”. How hot is “hot”? 30 degrees? 40 degrees? Language isn’t usually precise… We often mix rules (e.g. open the window slightly).

  31. Fuzzy Sets and Logic Fuzzy Sets let us say something is 90% “one thing” and 10% “another”, without being illogical. Fuzzy Logic then lets us use this in rules: E.g. it’s 90% “right” to do something, so I’ll do it 90% - opening a window, for example.

  32. Fuzzy Sets We give things a degree of membership between 0 and 1 in several sets (to a combined total of 1). We then label these sets using human terms. This encapsulates terms with no consensus definition, though we might use surveys to define them. [Diagram omitted: membership functions for “cold” and “hot” over 0–40 degrees, with the worked example 17° = 15% cold + 85% hot.]

  33. Fuzzy Logic models We give our variables membership functions, and express the variables as nouns (“length”, “temperature”) or adjectives (“long”, “hot”). We can then build up linguistic equations (“IF length long, AND temperature hot, THEN openWindow”). Actions are then based on conversion schemes for turning fuzzy percentages of inputs into membership functions of outputs.
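A single fuzzy rule can be sketched in Python with made-up piecewise-linear membership functions (the slides’ own membership curve gives different percentages for 17°, so treat the numbers below as purely illustrative):

```python
# Sketch: fuzzy memberships for temperature, and the rule
# "IF temperature hot THEN openWindow" acted on to the degree of
# membership - opening the window slightly rather than fully.

def hot(t):
    # invented ramp: 0 below 10 degrees, 1 above 30, linear in between
    return min(1.0, max(0.0, (t - 10) / 20))

def cold(t):
    return 1.0 - hot(t)        # memberships sum to 1, as on the slide

def window_opening(t):
    return hot(t)              # act in proportion to membership in "hot"

t = 17.0
print(round(cold(t), 2), round(hot(t), 2))  # 0.65 0.35
print(window_opening(t))                    # window opened 35%
```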

  34. Bayesian Networks Of course, it may be that we see people in one state, and their actions, but have no way of replicating the rulesets in human language. In this case, we can generate a Bayesian Network. These give probabilities that states will occur together, which can be interpreted as “if A then B”. They allow you to update the probabilities on new evidence, and to chain these “rules” together to make inferences.

  35. Bayesian Networks In a Bayesian Network the states are linked by probabilities, so: if A then B; if B then C; if C then D. Not only this, but the probabilities can be updated when an event A happens, propagating forward: the new probability of B is used to recalculate the probability of C, and so on.
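The chain update can be sketched numerically (Python; all the conditional probabilities below are invented for illustration):

```python
# Sketch: propagating probabilities along the chain A -> B -> C.
# Given P(B|A), P(B|not A) and P(C|B), P(C|not B), we can recompute
# P(B) and P(C) when evidence about A arrives.

def chain(p_a, p_b_given_a, p_b_given_not_a, p_c_given_b, p_c_given_not_b):
    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
    p_c = p_b * p_c_given_b + (1 - p_b) * p_c_given_not_b
    return p_b, p_c

# prior belief about A: P(A) = 0.3
print(chain(0.3, 0.9, 0.1, 0.8, 0.2))
# event A observed: set P(A) = 1 and propagate the new probabilities on
print(chain(1.0, 0.9, 0.1, 0.8, 0.2))
```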

  36. Expert Systems All these elements may be brought together in an “Expert System”. These are decision trees in which rules and probabilities link states. Forward chaining: you input states, and the system runs through the rules to suggest a most useful scenario of action. Backward chaining: you input goals, and the system tells you the states you need to achieve to get there. They don’t have to use fuzzy sets or Bayesian probabilities, but often do.
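Forward chaining over a tiny rulebase might be sketched as follows (Python; the rules are invented, without fuzzy or Bayesian weighting):

```python
# Sketch: forward chaining - input states, then run through the rules,
# firing any whose conditions are all met, until no new facts appear.

RULES = [
    ({"hungry"}, "go_to_cafe"),
    ({"go_to_cafe", "cafe_open"}, "eat"),
    ({"eat"}, "not_hungry"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add its conclusion
                changed = True
    return facts

print(forward_chain({"hungry", "cafe_open"}, RULES))
# the derived facts include 'eat' and 'not_hungry'
```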

  37. Picking rules However, ideally we want a cognitive framework to embed rule-choice within. Something that embeds decision making within a wider model of thought and existence.

  38. Belief-Desire-Intention We need some kind of reasoning architecture that allows the agents to decide or be driven to decisions. Most famous is the Belief-Desire-Intention model. Beliefs – facts about the world (which can be rules). Desires – things the agent wants to do / happen. Intentions – actions the agent has chosen, usually from a set of plans. Driven by Events, which cascade a series of changes.

  39. Decision making BDI decisions are usually made by assuming a utility function. This might include: whichever desire is most important wins; whichever plan achieves most desires; whichever plan is most likely to succeed; whichever plan does the above, after testing in multiple situations; whichever plan a community of agents decides on (e.g. by voting). Desires are goals, rather than more dynamic drivers.
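The “whichever plan achieves most desires” option can be sketched as follows (Python; the plan and desire names are invented for illustration):

```python
# Sketch: each plan declares which desires it satisfies, and the agent
# adopts the plan with the highest count as its intention - one of the
# simple utility functions listed above.

def choose_intention(desires, plans):
    # plans: name -> set of desires that plan achieves
    return max(plans, key=lambda name: len(plans[name] & desires))

desires = {"eat", "rest"}
plans = {
    "go_home": {"rest"},
    "go_cafe": {"eat", "rest"},   # hypothetically, a café with comfy chairs
    "go_gym": set(),
}
print(choose_intention(desires, plans))  # go_cafe
```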

  40. The PECS model A similar model is PECS – more sophisticated, as it includes internal drivers: Physis – physical states; Emotional – emotional states; Cognitive – facts about the world; Social status – position within society, etc. On the basis of these, the agent plans and picks a behaviour. Ultimately, though, the choice between these is made by a weighted utility function.

  41. Thinking for agents Ultimately we have to trade off complexity of reasoning against speed of processing. On the one hand, behaviour developed by a GA/GP would be dumb, but fast (which is why it is used to control agents in games). On the other, full cognitive architecture systems like Soar, CLARION, and Adaptive Control of Thought—Rational (ACT-R) are still not perfect, and take a great deal of time to set up.

  42. Further reading Michael Wooldridge (2009) “An Introduction to MultiAgent Systems” Wiley (2nd Edition) MAS architectures, BDI etc. Stuart Russell and Peter Norvig (2010) “Artificial Intelligence: A Modern Approach” Prentice Hall (3rd Edition) Neural nets, Language Processing, etc.
