
RePast Tutorial II



Today’s agenda

  • IPD: Experimental dimensions

  • EvolIPD model

  • Random numbers

  • How to build a model (2)

  • Scheduling

  • Homework C



Three crucial questions:

1. Variation: What are the actors’ characteristics?

2. Interaction: Who interacts with whom, when and where?

3. Selection: Which agents or strategies are retained, and which are destroyed?

(see Axelrod and Cohen, 1999, Harnessing Complexity)



Experimental dimensions

  • 2 strategy spaces: B, C

  • 6 interaction processes: RWR, 2DK, FRN, FRNE, 2DS, Tag

  • 3 adaptive processes: Imit, BMGA, 1FGA


"Soup-like" topology: RWR

In each time period, a player interacts with four other random players.

(Figure: a well-mixed "soup" of players with strategies ATFT, ALLC, ALLD and TFT.)
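To make the matching concrete, here is a minimal sketch of one RWR period; Player and its play() method are hypothetical stand-ins for the model's own classes, and Random is RePast's uchicago.src.sim.util.Random (introduced later in this tutorial):

    // One RWR time period: every player meets four random partners.
    for (int i = 0; i < numPlayers; i++) {
        Player aPlayer = (Player) agentList.get(i);
        for (int k = 0; k < 4; k++) {
            int j = Random.uniform.nextIntFromTo(0, numPlayers - 1);
            aPlayer.play((Player) agentList.get(j));  // hypothetical helper
        }
    }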


2D-Grid Topology: 2DK

The players are arranged on a fixed torus and interact with four neighbors in the von Neumann neighborhood.

(Figure: a toroidal grid populated by ALLD, TFT, ALLC and ATFT players.)
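As a sketch, the four von Neumann neighbors of the cell at (x, y) can be found by wrapping the coordinates around the torus; grid, width and height are hypothetical model fields:

    // Wrap-around lookup of the four von Neumann neighbors of (x, y)
    Player north = grid[x][(y - 1 + height) % height];
    Player south = grid[x][(y + 1) % height];
    Player west  = grid[(x - 1 + width) % width][y];
    Player east  = grid[(x + 1) % width][y];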


Fixed Random Network: FRN

The players have four random neighbors in a fixed random network. The relations do not have to be symmetric.

(Figure: a fixed directed network linking ATFT, TFT, ALLC and ALLD players.)
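Unlike RWR, the neighbor lists are drawn once, typically at setup time, and then stay fixed; a minimal sketch, where addNeighbor() is a hypothetical Player method:

    // Build the fixed random network once (e.g. in buildModel()).
    // Links are directed, so relations need not be symmetric.
    for (int i = 0; i < numPlayers; i++) {
        Player p = (Player) agentList.get(i);
        for (int k = 0; k < 4; k++) {
            int j = Random.uniform.nextIntFromTo(0, numPlayers - 1);
            p.addNeighbor((Player) agentList.get(j));  // hypothetical helper
        }
    }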


Adaptation through imitation

(Figure: the focal player observes its neighbors at time t (ATFT, ALLC, ALLD, ALLD, TFT, ALLC) and imitates one of them, here possibly becoming TFT.)
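One common imitation rule is to copy the strategy of the most successful neighbor from the previous period; a sketch, where getNeighbors(), getPayoff(), getStrategy() and setStrategy() are hypothetical Player methods:

    // Imitate the highest-payoff neighbor (one possible rule).
    Player best = aPlayer;
    for (Iterator it = aPlayer.getNeighbors().iterator(); it.hasNext();) {
        Player neighbor = (Player) it.next();
        if (neighbor.getPayoff() > best.getPayoff()) {
            best = neighbor;
        }
    }
    aPlayer.setStrategy(best.getStrategy());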


Adaptation with BMGA: comparison error (prob. 0.1)

(Figure: genetic adaptation in a fixed spatial neighborhood; the focal player compares neighbors' payoffs (6.0, 2.8, 2.2, 9.0, 0.8), with a comparison error occurring with probability 0.1.)


BMGA continued: copy error (prob. 0.04 per "bit")

(Figure: as above, but the adopted strategy can mutate while being copied, with probability 0.04 per "bit", e.g. p=0; q=0 => p=1; q=0.)


Tutorial Sequence

  • December 7: SimpleIPD (strategy space)

  • Today: EvolIPD (RWR)

  • December 21: GraphIPD (charts and GUI), GridIPD (2DK)

  • January 11: ExperIPD (batch runs and parameter sweeps)


EvolIPD: flowchart

(Flowchart: setup() -> buildModel(); each step() then calls resetPlayers(), interactions() and adaptation(), and finally reportResults(); within interactions(), each of the two players runs play(), remember() and addPayoff().)

Markovian vs. asynchronous adaptation

(Figure: under Markovian adaptation, all agents move together from the state at t-1 to the state at t; under asynchronous adaptation, agents are updated one at a time, so later updates already see earlier ones.)

Going sequential

private void stepMarkovian() {
    // We carry out four sub-activities:

    // 1. Reset the agents' statistics:
    // loop through the entire agent list
    for (int i = 0; i < numPlayers; i++) {
        // Pick the agent
        final Player aPlayer = (Player) agentList.get(i);
        resetPlayer(aPlayer);
    }

    // 2. Let them interact with their neighbors
    for (int i = 0; i < numPlayers; i++) {
        final Player aPlayer = (Player) agentList.get(i);
        interactions(aPlayer);
    }

    // 3. FIRST STAGE OF DOUBLE BUFFERING!
    // Let all agents calculate their adapted type first
    for (int i = 0; i < numPlayers; i++) {
        final Player aPlayer = (Player) agentList.get(i);
        adaptation(aPlayer);
    }

    // 4. SECOND STAGE OF DOUBLE BUFFERING!
    // Once they know their new strategy,
    // let them update to the new type
    for (int i = 0; i < numPlayers; i++) {
        final Player aPlayer = (Player) agentList.get(i);
        updating(aPlayer);
    }

    reportResults(); // Report some statistics
}

private void stepAsynchronous() {
    // The same four sub-activities, now agent by agent:
    for (int i = 0; i < numPlayers; i++) {
        // Pick an agent at random
        final Player aPlayer = (Player) agentList.get(
                this.getNextIntFromTo(0, numPlayers - 1));
        // Reset the agent's statistics
        resetPlayer(aPlayer);
        // Let it interact with its neighbors
        interactions(aPlayer);
        // Let it adapt
        adaptation(aPlayer);
        // Let it update its new type
        updating(aPlayer);
    }
    reportResults(); // Report some statistics
}


How to work with random numbers

  • RePast's full-fledged random number generator: uchicago.src.sim.util.Random

  • It encapsulates the random number distributions of the Colt library: http://hoschek.home.cern.ch/hoschek/colt/

  • All distributions draw from the same random number stream, which makes simulation runs easy to repeat

  • Every distribution uses the MersenneTwister pseudo-random number generator


Pseudo-random numbers

  • Computers normally cannot generate truly random numbers

  • "Random number generators should not be chosen at random" (Knuth, 1986)

  • A simple example (the Cliff RNG):

    X0 = 0.1

    Xn+1 = |100 ln(Xn) mod 1|

x1 = 0.25850929940455103

x2 = 0.28236111950289455

x3 = 0.4568461655760814

x4 = 0.3408562751932891

x5 = 0.6294370918024157

x6 = 0.29293640856857195

x7 = 0.7799729122847907

x8 = 0.849608774153694

x9 = 0.29793011540822434

x10 = 0.08963320319223556

x11 = 0.2029456303939412

...
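The sequence above can be reproduced with a few lines of plain Java; note that Java's % operator on doubles keeps the sign of the dividend, which is why the formula takes the absolute value:

    public class CliffRng {
        public static void main(String[] args) {
            double x = 0.1;  // X0
            for (int n = 1; n <= 11; n++) {
                // Xn+1 = |100 ln(Xn) mod 1|
                x = Math.abs((100.0 * Math.log(x)) % 1.0);
                System.out.println("x" + n + " = " + x);
            }
        }
    }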


"True" random numbers

  • A new service offered by the University of Geneva and the company id Quantique:

    http://www.randomnumber.info/

  • Not (yet) integrated into RePast


Simple random number distributions

  • Initialization (automatically executed by SimpleModel):

    Random.setSeed(seed);
    Random.createUniform();
    Random.createNormal(0.0, 1.0);  // mean, standard deviation

  • Usage:

    int i = Random.uniform.nextIntFromTo(0, 10);
    double v1 = Random.normal.nextDouble();
    double v2 = Random.normal.nextDouble(0.5, 0.3);  // mean, standard deviation

Available distributions

  • Beta

  • Binomial

  • Chi-square

  • Empirical (user-defined probability distribution function)

  • Gamma

  • Hyperbolic

  • Logarithmic

  • Normal (or Gaussian)

  • Pareto

  • Poisson

  • Uniform

(Figure: example density plots of the Normal and Beta distributions.)


Custom random number generation

  • May be required if two independent random number streams are needed

  • Bypass RePast's Random and use the Colt library directly:

    import cern.jet.random.*;
    import cern.jet.random.engine.MersenneTwister;

    public class TwoStreamsModel extends SimpleModel {

        Normal normal;
        Uniform uniform;

        public void buildModel() {
            super.buildModel();
            // Two engines with different seeds give two independent streams
            MersenneTwister generator1 = new MersenneTwister(123);
            MersenneTwister generator2 = new MersenneTwister(321);
            uniform = new Uniform(generator1);
            normal = new Normal(0.0, 1.0, generator2);
        }

        public void step() {
            int i = uniform.nextIntFromTo(0, 10);
            double value = normal.nextDouble();
        }
    }


How to build a model (2)

  • If more flexibility is desired, one can extend SimModelImpl instead of SimpleModel

  • Differences from SimpleModel:

    • No buildModel(), step(), ... methods

    • No agentList, schedule, params, ... fields

    • Most importantly: no default scheduling

  • Required methods:

    public void setup()
    public String[] getInitParam()
    public void begin()
    public Schedule getSchedule()
    public String getName()


SimModelImpl

import java.util.ArrayList;
import java.util.Iterator;

import uchicago.src.sim.engine.Schedule;
import uchicago.src.sim.engine.SimInit;
import uchicago.src.sim.engine.SimModelImpl;

public class MyModelImpl extends SimModelImpl {

    public static final int TFT = 1;
    public static final int ALLD = 3;

    private int a1Strategy = TFT;
    private int a2Strategy = ALLD;

    private Schedule schedule;
    private ArrayList agentList;

    public void setup() {
        a1Strategy = TFT;
        a2Strategy = ALLD;
        schedule = new Schedule();
        agentList = new ArrayList();
    }

    public String[] getInitParam() {
        return new String[]{"A1Strategy"};
    }


SimModelImpl (cont.)

    public String getName() {
        return "Example Model";
    }

    public void begin() {
        Agent a1 = new Agent(a1Strategy);
        Agent a2 = new Agent(a2Strategy);
        agentList.add(a1);
        agentList.add(a2);
        // Introspection: the schedule looks up this.step() by name
        schedule.scheduleActionBeginning(1, this, "step");
    }

    public void step() {
        for (Iterator iterator = agentList.iterator(); iterator.hasNext();) {
            Agent agent = (Agent) iterator.next();
            agent.play();
        }
    }


SimModelImpl (cont.)

    public String[] getInitParam() {
        return new String[]{"A1Strategy"};
    }

    // Getter/setter pair that backs the "A1Strategy" parameter
    public int getA1Strategy() {
        return a1Strategy;
    }

    public void setA1Strategy(int strategy) {
        this.a1Strategy = strategy;
    }

    public static void main(String[] args) {
        SimInit init = new SimInit();
        SimModelImpl model = new MyModelImpl();
        init.loadModel(model, null, false);
    }
}


How to use a schedule

  • The Schedule object is responsible for all the state changes within a RePast simulation:

    schedule.scheduleActionBeginning(1, new DoIt());
    schedule.scheduleActionBeginning(1, new DoSomething());
    schedule.scheduleActionAtInterval(3, new ReDo());

    tick 1: DoIt, DoSomething
    tick 2: DoSomething, DoIt
    tick 3: ReDo, DoSomething, DoIt
    tick 4: DoSomething, DoIt
    tick 5: DoIt, DoSomething
    tick 6: DoSomething, ReDo, DoIt

    (Actions scheduled for the same tick execute in a random order; ReDo runs every third tick.)


Different types of actions

  • Inner class:

    class MyAction extends BasicAction {
        public void execute() {
            doSomething();
        }
    }
    schedule.scheduleActionAt(100, new MyAction());

  • Anonymous inner class:

    schedule.scheduleActionAt(100, new BasicAction() {
        public void execute() {
            doSomething();
        }
    });

  • Introspection:

    schedule.scheduleActionAt(100, this, "doSomething");


Schedule in SimpleModel

public void buildSchedule() {
    if (autoStep)
        schedule.scheduleActionBeginning(startAt, this, "runAutoStep");
    else
        schedule.scheduleActionBeginning(startAt, this, "run");
    schedule.scheduleActionAtEnd(this, "atEnd");
    schedule.scheduleActionAtPause(this, "atPause");
    schedule.scheduleActionAt(stoppingTime, this, "stop", Schedule.LAST);
}

public void runAutoStep() {
    preStep();
    autoStep();
    postStep();
}

public void run() {
    preStep();
    step();
    postStep();
}

private void autoStep() {
    if (shuffle)
        SimUtilities.shuffle(agentList);
    int size = agentList.size();
    for (int i = 0; i < size; i++) {
        Stepable agent = (Stepable) agentList.get(i);
        agent.step();
    }
}


Scheduling actions on lists

  • An action can be scheduled for execution on every element of a list:

    public class Agent {
        public void step() {
        }
    }
    schedule.scheduleActionBeginning(1, agentList, "step");  // step() in Agent

  • which is equivalent to:

    public void step() {
        for (Iterator it = agentList.iterator(); it.hasNext();) {
            Agent agent = (Agent) it.next();
            agent.step();
        }
    }
    schedule.scheduleActionBeginning(1, model, "step");  // step() in SimpleModel


Different types of scheduling

  • scheduleActionAt(double at, …): executes at the specified clock tick

  • scheduleActionBeginning(double begin, …): executes starting at the specified clock tick and every tick thereafter

  • scheduleActionAtInterval(double in, …): executes at the specified interval

  • scheduleActionAtEnd(…): executes at the end of the simulation run

  • scheduleActionAtPause(…): executes when a pause in the simulation occurs
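A minimal sketch combining these calls, using the introspection and BasicAction styles shown earlier (report() and cleanup() are hypothetical model methods):

    schedule.scheduleActionAt(50, this, "report");       // once, at tick 50
    schedule.scheduleActionBeginning(1, this, "step");   // every tick from tick 1 on
    schedule.scheduleActionAtInterval(10, new BasicAction() {
        public void execute() { report(); }              // ticks 10, 20, 30, ...
    });
    schedule.scheduleActionAtEnd(this, "cleanup");       // when the run ends
    schedule.scheduleActionAtPause(this, "atPause");     // whenever the run pauses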



Homework C

Modify the EvolIPD program by introducing a selection mechanism that eliminates inefficient players. The current adaptation() method should thus be modified such that the user can switch between the old adaptation routine, which relies on strategic learning, and the new “Darwinian” selection mechanism. The selection mechanism should remove the 10% least successful players from the agentList after each round of interaction. To keep the population size constant, the same number of players should be “born” with strategies drawn randomly from the 90% remaining players. Note that because it generates a population-level process, the actual selection mechanism belongs inside the Model class rather than in Player.
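A minimal sketch of one possible selection step is given below. It sorts the players by payoff and replaces the bottom decile; getPayoff(), getStrategy() and setStrategy() are hypothetical Player accessors, and this is only one of many valid designs:

    // Inside the Model class (requires java.util.Collections/Comparator):
    // remove the 10% least successful players and "rebirth" them with
    // strategies drawn randomly from the 90% survivors.
    Collections.sort(agentList, new Comparator() {
        public int compare(Object o1, Object o2) {
            return Double.compare(((Player) o1).getPayoff(),
                                  ((Player) o2).getPayoff());
        }
    });
    int losers = numPlayers / 10;  // worst 10% come first after sorting
    for (int i = 0; i < losers; i++) {
        Player dead = (Player) agentList.get(i);
        Player parent = (Player) agentList.get(
                Random.uniform.nextIntFromTo(losers, numPlayers - 1));
        dead.setStrategy(parent.getStrategy());
    }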

Does this change make any difference in terms of the output?

