
Programming for Social ScientistsLecture 4

UCLA Political Science 209-1: Programming for Social Scientists

Winter 1999

Lars-Erik Cederman & Benedikt Stefansson

Exercise 1b

int matrix[2][2] = {{3,0},{5,1}};

@implementation Player
...
-setRow: (int) r Col: (int) c {
  if (rowPlayer) {
    row = r;
    col = c;
  } else {
    row = c;
    col = r;
  }
  return self;
}

-(BOOL)move {
  return matrix[!row][col] > matrix[row][col];
}

POL SCI 209-1 Cederman / Stefansson

Exercise 1c

int matrix[2][2][2] = {{{3,0},{5,1}},
                       {{3,1},{5,0}}};

@implementation Player
-init: (int)n rowPlayer: (BOOL)rp playerType: (int)pt {
  name = n;
  rowPlayer = rp;
  playerType = pt;
  return self;
}
...
-(BOOL)move {
  return matrix[playerType][!row][col] >
         matrix[playerType][row][col];
}


Exercise 1c (cont'd)

player1 = [Player create: globalZone];
player2 = [Player create: globalZone];

for (pt=0; pt<2; pt++) {
  [player1 init: 1 rowPlayer: YES playerType: pt];
  [player2 init: 2 rowPlayer: NO playerType: pt];
  for (r=0; r<2; r++) {
    printf("+---+---+\n");
    printf("|");
    for (c=0; c<2; c++) {
      [player1 setRow: r Col: c];
      [player2 setRow: r Col: c];
      if ([player1 move] || [player2 move])
        printf("   |");
      else
        printf(" * |");
    }
    printf("\n");
  }
}
printf("+---+---+\n");


Exercise 2: Player.m

@implementation Player
-init: (int) n {
  name = n;
  alive = YES;
  return self;
}

-setOther: o {
  other = o;
  return self;
}

-(BOOL)isAlive {
  return alive;
}

-play: r {
  int shot;

  [r load];
  shot = [r trigger];
  if (shot)
    alive = NO;
  else
    [other play: r];
  return self;
}

@end

Exercise 2: Revolver.m

...

#import <stdlib.h>

@implementation Revolver
-empty {
  bullets = 0;
  return self;
}

-load {
  bullets++;
  return self;
}

-(BOOL)trigger {
  return (double)rand()/(double)RAND_MAX < bullets/6.0;
}

@end


Prisoner's Dilemma Game

              Player 2
             C        D
Player 1  C  3,3      0,5
          D  5,0      1,1


Iterated Prisoner's Dilemma
  • repetitions of the single-shot PD
  • the "Folk Theorem" shows that mutual cooperation is sustainable in equilibrium
  • In The Evolution of Cooperation, Robert Axelrod (1984) ran a computer tournament of the IPD
    • cooperation sometimes emerges
    • Tit For Tat proved a particularly effective strategy


One-Step Memory Strategies

Strategy = (i, p, q)

  • i = prob. of cooperating at t = 0
  • p = prob. of cooperating if the opponent cooperated at t-1
  • q = prob. of cooperating if the opponent defected at t-1

[Diagram: memory holds the opponent's move at t-1; after a remembered C the player cooperates at t with prob. p, after a remembered D with prob. q.]

The Four Strategies (cf. Cohen et al. p. 8)


A four-iteration PD

[Diagram: at each of four iterations (t = 1..4), Row and Column each draw a move from {C, D} (at the start according to i, thereafter according to p and q) and receive a payoff U; each player's four payoffs sum to a total score S.]


all-D meets TFT

[Diagram: a four-round match. Row plays all-D (i = 0, p = q = 0) and moves D, D, D, D. Column plays TFT (i = 1, p = 1, q = 0) and moves C, D, D, D. Cumulated payoffs: all-D earns 5 + 1 + 1 + 1 = 8; TFT earns 0 + 1 + 1 + 1 = 3.]


Moves and Total Payoffs for all 4 x 4 Strategy Combinations

Source: Cohen et al. Table 3, p. 10


simpleIPD: File structure

  • main.m
  • ModelSwarm.h
  • ModelSwarm.m
  • Player.h
  • Player.m


simpleIPD: main.m

int main(int argc, const char ** argv) {
  id modelSwarm;

  initSwarm(argc, argv);
  modelSwarm = [ModelSwarm create: globalZone];
  [modelSwarm buildObjects];
  [modelSwarm buildActions];
  [modelSwarm activateIn: nil];
  [[modelSwarm getActivity] run];
  return 0;
}


The ModelSwarm
  • An instance of the Swarm class can manage a model world
  • It facilitates the creation of agents and their interaction model
  • A model can contain many Swarms, often nested

[Diagram: main creates the ModelSwarm, which in turn creates Player1 and Player2.]


simpleIPD: ModelSwarm.h

...

@interface ModelSwarm: Swarm {
  id player1, player2;
  int numIter;
  id stopSchedule, modelSchedule, playerActions;
}

+createBegin: (id) aZone;
-createEnd;
-updateMemories;
-distrPayoffs;
-buildObjects;
-buildActions;
-activateIn: (id) swarmContext;
-stopRunning;
@end


Creating a Swarm

I. createBegin, createEnd
  • Initialize memory and parameters

II. buildObjects
  • Build all the agents and objects in the model

III. buildActions
  • Define the order and timing of events

IV. activate
  • Merge into the top-level Swarm, or start the Swarm running


Step I: Initializing the ModelSwarm

int matrix[2][2] = {{1,5},{0,3}};

@implementation ModelSwarm
+createBegin: (id) aZone {
  ModelSwarm * obj;

  obj = [super createBegin: aZone];
  return obj;
}

-createEnd {
  return [super createEnd];
}

Details on the createBegin method

  • The "+" indicates that this is a class method, as opposed to "-", which indicates an instance method.
  • ModelSwarm * obj indicates to the compiler that obj is statically typed to the ModelSwarm class.
  • [super ...] executes the createBegin method in the superclass of obj (Swarm) and returns an instance of ModelSwarm.

Memory zones

  • The Defobj superclass provides facilities to create and drop an object (via create: or createBegin:/createEnd).
  • In either case the object is created "in a memory zone".
  • Effectively this means that the underlying mechanism provides enough memory for the instance, its variables, and its methods.
  • The zone also keeps track of all objects created in it and lets you reclaim memory simply by dropping the zone: it signals all objects in it to destroy themselves.


Where did that zone come from?

In main.m: initSwarm(argc, argv);
  • Executes various functions in defobj and simtools which, among other things, create a global memory zone.

In main.m: modelSwarm = [ModelSwarm create: globalZone];
  • The create: method is implemented in defobj, the superclass of the Swarm class, and it calls the createBegin: method in ModelSwarm.

In ModelSwarm.m: +createBegin:


Step II: Building the agents

-buildObjects {
  player1 = [Player createBegin: self];
  [player1 initPlayer];
  player1 = [player1 createEnd];

  player2 = [Player createBegin: self];
  [player2 initPlayer];
  player2 = [player2 createEnd];

  [player1 setOtherPlayer: player2];
  [player2 setOtherPlayer: player1];

  return self;
}

Details on the buildObjects phase
  • The purpose of this method is to create each object instance needed at the start of the simulation, and then to pass parameters to those objects
  • It is good OOP practice to provide a setX: method for each parameter X we want to set, as in: [player1 setOtherPlayer: player2]


Why createBegin vs. create?
  • Using createBegin:/createEnd is appropriate when the object still needs to initialize, calculate, or set something; usually that code goes in the createEnd method.
  • Always pair createBegin with createEnd to avoid messy problems.
  • But create: is perfectly fine if we just want to create an object without further ado.


simpleIPD: ModelSwarm.m (cont'd)

-updateMemories {
  [player1 remember];
  [player2 remember];
  return self;
}

-distrPayoffs {
  int action1, action2;

  action1 = [player1 getNewAction];
  action2 = [player2 getNewAction];
  [player1 setPayoff: [player1 getPayoff] + matrix[action1][action2]];
  [player2 setPayoff: [player2 getPayoff] + matrix[action2][action1]];
  return self;
}


simpleIPD: Player.h

@interface Player: SwarmObject {
  int time, numIter;
  int i, p, q;
  int cumulPayoff;
  int memory;
  int newAction;
  id other;
}

-initPlayer;
-createEnd;
-setOtherPlayer: player;
-setPayoff: (int) p;
-(int)getPayoff;
-(int)getNewAction;
-remember;
-step;
@end


simpleIPD: Player.m

@implementation Player
-initPlayer {
  time = 0;
  cumulPayoff = 0;
  i = 1; // TFT
  p = 1;
  q = 0;
  newAction = i;
  return self;
}

-createEnd {
  [super createEnd];
  return self;
}

-setOtherPlayer: player {
  other = player;
  return self;
}

-setPayoff: (int) payoff {
  cumulPayoff = payoff;
  return self;
}

-(int) getPayoff {
  return cumulPayoff;
}

-(int) getNewAction {
  return newAction;
}

-remember {
  memory = [other getNewAction];
  return self;
}

-step {
  if (time == 0)
    newAction = i;
  else {
    if (memory == 1)
      newAction = p;
    else
      newAction = q;
  }
  time++;
  return self;
}

Step III: Building schedules

-buildActions {
  stopSchedule = [Schedule create: self];
  [stopSchedule at: 12 createActionTo: self message: M(stopRunning)];

  modelSchedule = [Schedule createBegin: self];
  [modelSchedule setRepeatInterval: 3];
  modelSchedule = [modelSchedule createEnd];

  playerActions = [ActionGroup createBegin: self];
  playerActions = [playerActions createEnd];
  [playerActions createActionTo: player1 message: M(step)];
  [playerActions createActionTo: player2 message: M(step)];

  [modelSchedule at: 0 createActionTo: self message: M(updateMemories)];
  [modelSchedule at: 1 createAction: playerActions];
  [modelSchedule at: 2 createActionTo: self message: M(distrPayoffs)];
  return self;
}


Schedules

  • Schedules define events in terms of:
    • Time of first invocation
    • Target object
    • Method to call

[Diagram: a timeline t, t+1, t+2 with actions such as [m update] and [m distribute] scheduled at successive steps.]

[schedule at: t createActionTo: agent message: M(method)]


ActionGroups

  • Group events at the same timestep
  • Define each event in terms of:
    • Target object
    • Method to call

[Diagram: at each timestep t = 1, 2, 3, the group fires [p1 step] and [p2 step] together, between [m update] and [m distribute].]

[actionGroup createActionTo: agent message: M(method)]


Implementation

schedule = [Schedule createBegin: [self getZone]];
[schedule setRepeatInterval: 3];
schedule = [schedule createEnd];

[schedule at: 1 createActionTo: m message: M(update)];
[schedule at: 3 createActionTo: m message: M(distribute)];

actionGroup = [ActionGroup createBegin: [self getZone]];
actionGroup = [actionGroup createEnd];
[actionGroup createActionTo: p1 message: M(step)];
[actionGroup createActionTo: p2 message: M(step)];

[schedule at: 2 createAction: actionGroup];

[Timeline: the schedule repeats with period 3 at t, t+1, t+2, t+3, t+4, ...]


Step IV: Activating the Swarm

-activateIn: (id) swarmContext {
  [super activateIn: swarmContext];
  [modelSchedule activateIn: self];
  [stopSchedule activateIn: self];
  return [self getActivity];
}

-stopRunning {
  printf("Payoffs: %d,%d\n", [player1 getPayoff], [player2 getPayoff]);
  [[self getActivity] terminate];
  return self;
}


Activation of schedule(s)

In main.m: [modelSwarm activateIn: nil];
  • There is only one Swarm, so we activate it in nil.
  • This one line can set in motion a complex scheme of merging and activation:

-activateIn: (id) swarmContext
  [modelSchedule activateIn: self]


Previous example as a for loop

for (t = 1; t < 4; t++) {
  [self updateMemories];
  [player1 step];
  [player2 step];
  [self distrPayoffs];
}
[self stopRunning];
