Experimental Evidence on Trading Favors in Networks

Presentation Transcript


  1. Experimental Evidence on Trading Favors in Networks Markus Mobius and Tanya Rosenblat November 2004

  2. What Do We Mean by Social Capital? • Social Capital is the value of social obligations or contacts formed through a social network. • Relationships are useful. • But maintaining relationships is costly.

  3. What Do We Mean by Social Capital? • Example: May 17 deep thought – ESA meetings in Amsterdam in June; still need a hotel! Hotels were sold out in March… • Scenario 1: I have a friend in Amsterdam. I ask if I can stay at her house. • Scenario 2: I have a friend in Brussels. He has a friend in Amsterdam. He asks if I can stay at her house.

  4. Scenario 1: [diagram: Tanya and Sabine]

  5. Scenario 1: Request for a favor [diagram: Tanya asks Sabine]

  6. Scenario 1: [diagram: Tanya and Sabine]

  7. Scenario 1: Favor granted [diagram]

  8. Why did Sabine do it? • She hopes she can stay in Boston in the future. • She thinks I’ll help her out with something else. • I’ve done many favors for her in the past. • She is just very nice. • Imagine I don’t know Sabine, but just find her name in the phonebook. Will she let me stay at her place?

  9. Scenario 2: [diagram: Tanya, Alain, and Sabine]

  10. Scenario 2: Request for a favor; request relayed [diagram: Tanya asks Alain, who relays the request to Sabine]

  11. Scenario 2: Favor given to Alain [diagram]

  12. Scenario 2: Favor done for Tanya; favor done for Alain [diagram]

  13. Why did Alain do it? • He knows that I won’t destroy Sabine’s apartment. • He thinks I’ll help him out with something else. • I’ve done many favors for him in the past. • He is just very nice. • Why did Sabine do it? • She thinks Alain will help her out with something else. • Alain has done many favors for her in the past. • She is just very nice.

  14. Social Capital (Putnam’s Definition) • Social capital refers to the collective value of all “social networks” [who people know] and the inclinations that arise from these networks to do things for each other [“norms of reciprocity”]

  15. Social Capital (Putnam’s Definition) • Social capital works through multiple channels: • information flows (e.g. learning about jobs) • norms of reciprocity (mutual aid) • Bonding networks that connect folks who are similar sustain particularized (in-group) reciprocity. • Bridging networks that connect individuals who are diverse sustain generalized reciprocity.

  16. Motivation • non-market interactions are important (Prendergast and Stole 1999; Granovetter 1973) • how are favors paid? what ‘currency’? • long-term bilateral relationships; reciprocity

  17. Motivation • often no double coincidence of wants across time: frequent within-group interaction but infrequent interaction with any particular agent (Granovetter’s weak links) • group information (institutions; gossip) • networks as monitoring mechanisms

  18. Main Questions • Are there cooperative “network” equilibria? (YES) • Do agents actually choose to play these cooperative equilibria? => Experimental Framework

  19. Theoretical Intuition • Large number of agents, continuous time • Alarm clocks go off independently at rate 1; when an agent's clock goes off, she needs a good (e.g., information about jobs) • A share p of agents can provide the good (each agent other than the one in need can provide it with probability p) • Example: p = 5%, 1000 agents => about 50 people can provide it • Helping costs c but gives benefit b > c => there are benefits from trade
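
A quick numerical check of the example above (a sketch of my own; the 300/200-point values for b and c are the amounts used later in the experiment, slide 39):

```python
# Expected number of potential helpers when each of the other N-1 agents can
# provide the needed good with probability p.
N, p = 1000, 0.05
print(p * (N - 1))   # ~ 50, matching "50 people can do it"

# Gains from trade: helping costs c but yields benefit b > c, so every
# completed favor creates joint surplus b - c.
b, c = 300, 200      # point values taken from the experiment (slide 39)
print(b - c)         # 100 points of surplus per favor
```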

  20. Theoretical Intuition • Similar to the Prisoner's Dilemma • I help you now in a repeated-game environment because you will help me in the future => bilateral networks based on reciprocity. • However, if we interact infrequently, cooperation is hard to sustain. • Kandori (1992) • low-frequency interaction through random matching (can “help” each other only infrequently) • cooperation can be sustained through contagious punishment • cooperation breaks down as N becomes large • Group punishments help.

  21. Theoretical Intuition • Need information aggregation for group punishment • Global image scores as a particular aggregation mechanism: Sigmund and Nowak (1998) • image scores allow group punishments • global image scores are memory-intensive as N becomes large • Money as a particular image score: Kocherlakota (1998), money as memory

  22. Theoretical Intuition – Network Mechanism to Sustain Cooperation • Networks with weak links • Local image scores – I only have information about the behavior of my circle of friends. • I can only punish my friends. • Example: everyone has 4 friends but no friends in common.

  23. Weak Link Network with 4 friends – everyone has 4 friends but no friends in common

  24. If Agent 2 owes a favor to Agent 1, we say 1 has an open link to 2.

  25. Theoretical Intuition – Network Mechanism to Sustain Cooperation • Every link is either open or closed with probability ½ at the beginning of time. • How much is an open link worth to me? Out of my 3 friends, 1½ owe me favors on average. If I ask you for a favor, 1½ of your friends owe you a favor, and so on. => The number of owed favors increases exponentially => eventually somebody who can grant my request is reached almost surely.
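
A rough way to formalize this growth argument (a back-of-the-envelope sketch using the slide's 1½-per-step figure, treating the chains of owed favors as a branching process and ignoring overlaps between chains):

\[
m_k \approx \left(\tfrac{3}{2}\right)^{k} \quad \text{(expected number of agents reachable through a chain of } k \text{ owed favors)},
\]
\[
\Pr[\text{nobody within } k \text{ steps can provide the good}] \approx (1-p)^{\,m_1 + m_2 + \dots + m_k} \longrightarrow 0 \quad \text{as } k \to \infty .
\]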

  26. Agent 1 needs help. Agents 2 and 5 owe Agent 1 favors; 6 owes 5; 7 owes 6; 8 owes 7. Agent 8 can provide the good. [diagram]

  27. Agent 1 asks 5; 5 asks 6; 6 asks 7; 7 asks 8; 8 does it. [diagram]

  28. Note that 1 doesn't know 8! Why does 8 do it? – He values his relationship with 7. [diagram]
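
To make the chain mechanics concrete, here is a minimal simulation sketch (my own illustration, not the paper's model or code). It approximates the "4 friends, no friends in common" structure with a random 4-regular graph, which is locally tree-like, assumes each friend owes the agent a favor with probability ½, and lets each agent produce the requested good with probability p:

```python
import random
import networkx as nx

def relayed_request(G, owes, producers, start, max_depth=10):
    """Search along chains of owed favors: each agent asks a friend who owes
    them a favor; the chain succeeds when it reaches someone who can produce
    the requested good."""
    frontier = [(start, [start])]
    for _ in range(max_depth):
        next_frontier = []
        for node, chain in frontier:
            for nb in G.neighbors(node):
                if nb in chain or not owes.get((nb, node), False):
                    continue                      # only follow links where nb owes node a favor
                if nb in producers:
                    return chain + [nb]           # the favor can be passed back down the chain
                next_frontier.append((nb, chain + [nb]))
        frontier = next_frontier
    return None

rng = random.Random(0)
G = nx.random_regular_graph(4, 1000, seed=0)              # everyone has 4 friends, locally tree-like
producers = {v for v in G.nodes if rng.random() < 0.05}   # a share p = 5% can provide the good

# Initial obligations: for each pair of friends, one of them (chosen at random)
# owes the other a favor, so any given friend owes you with probability 1/2.
owes = {}
for u, v in G.edges():
    if rng.random() < 0.5:
        owes[(u, v)] = True    # u owes v
    else:
        owes[(v, u)] = True    # v owes u

chain = relayed_request(G, owes, producers, start=0)
print(chain)   # a path of agents, each owing a favor to the one before; None if no chain succeeds
```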

  29. Theoretical Intuition – Network Mechanism to Sustain Cooperation • Why wouldn't I invest in infinitely many links? • Example: if I have 100 friends who owe me favors, I will never need to reciprocate favors – I will spend all my time consuming the links. • So nobody wants to be my friend in the first place.

  30. Theoretical Intuition – Network Mechanism to Sustain Cooperation • We can show that there is a network equilibrium, with relayed requests for help, in which cooperation can be sustained. • Next step: design an experiment to see whether agents choose to play this equilibrium.

  31. Related Experimental Work • We know that there is a lot of cooperation in the lab when theory suggests there should be none • Examples of direct reciprocity include • Prisoner's Dilemma (Andreoni and Miller, 1993; Cooper, DeJong, Forsythe, and Ross, 1996) • centipede game (McKelvey and Palfrey, 1992) • public goods game (Croson, 1998) • investment game (Berg, Dickhaut and McCabe, 1995) • employer/employee relationships (Fehr, Gachter and Kirchsteiger, 1997; Fehr and Gachter, 1998)

  32. Related Experimental Work • Experimental results on indirect reciprocity = “How should I treat you if I know how you treated somebody else?” • In one-shot games: investment game (Dufwenberg, Gneezy, Guth and van Damme, 2000; Buchan, Croson, and Dawes, 2001) • In repeated interactions – helping games with global image scores • Wedekind and Milinski (2000) • Seinen and Schram (2000) and Bolton, Ockenfels, Katok and Huck (2001): vary the amount of image-score information available to donors

  33. Related Experimental Work • Main results: • some baseline altruism • strategic indirect reciprocity – evidence that agents can learn to cooperate using global image scores • Design an experiment with networks and local image scores.

  34. Experiment • helping game where subjects choose whom to ask • agents are not always able to help => no small cliques • only local information (cannot observe others’ interactions) • two treatments: direct and indirect games

  35. Direct Game: 10 players who can send (direct) messages to each other. No referrals allowed.

  36–37. Indirect Game: Weak link network that connects you to all other players. Players can send direct messages and forward messages they receive. [network diagrams]

  38. Experiment • Conducted in Argentina from August 2002 to April 2003 • Instructions in Spanish (with reverse translation) • Payoffs denominated in points, with an exchange rate of 100 points = 0.40 pesos • Flat participation fee of 12 pesos • The experiment lasted 1–1½ hours • Average hourly wage for college students at the time: 6–10 pesos/hr • Each subject participated in 2 sessions, each consisting of several rounds.

  39. Experiment • a probabilistic ending rule after every round, resulting in an average session length of 14 rounds • initial account: 1500 points • message cost: 2 points • benefit of getting the good: 300 points • cost of giving the good: 200 points
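
Converting the point values above at the exchange rate stated on slide 38 (100 points = 0.40 pesos) gives a feel for the stakes:

\[
300 \text{ points} = 1.20 \text{ pesos (benefit)}, \quad 200 \text{ points} = 0.80 \text{ pesos (cost of helping)}, \quad 100 \text{ points} = 0.40 \text{ pesos (joint surplus per favor)}, \quad 2 \text{ points} = 0.008 \text{ pesos (message cost)}.
\]

At the stated student wage of 6–10 pesos per hour, the 1.20-peso benefit corresponds to roughly 7–12 minutes of work, so stakes are small but not negligible.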

  40. Experiment • Messages are costly to avoid meaningless messaging and to limit the number of messages, because a new round starts only after all messages have been dealt with. • There are 50 goods; each player can give 5 of the 50. • When the good is not given, a player cannot tell whether the other player could not give it or chose not to. • In the indirect treatment, when the good is granted, the player never finds out who along the chain granted it. • Parameters are set such that, theoretically, cooperation cannot be sustained in the direct game, while a network equilibrium exists for the indirect game.

  41–45. Timing – Direct Game In each round: Subjects learn about the one good they need and the five goods they can produce. Subjects choose the recipient(s) of messages and send messages. Receivers of messages send responses; if they can give the good, they have to choose either to give it or to ignore the request. The round is over when there are no outstanding messages. The need and production abilities change every round. Sending a message costs 2 points. Requesters who do not get the good never find out why. Agents can send as many messages as they want, as long as they have points to pay for them.

  46. Timing – Indirect Game In each round: Subjects learn about the one good they need and the five goods they can produce. Subjects choose the recipient(s) of messages and send messages. Receivers of messages send responses; they always have the option to refer the request to someone else. If they can give the good, they have to choose either to give it, to ignore the request, or to refer it. The round is over when there are no outstanding messages.
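
The timing on slides 41–46 can be summarized as a small round loop (my own reconstruction, not the experimental software; the random give/ignore/refer choices are placeholders for the subjects' actual decisions):

```python
import random

N_GOODS = 50          # goods in the economy (slide 40)
CAN_PRODUCE = 5       # goods each player can provide in a round
MESSAGE_COST = 2      # points per message sent
BENEFIT = 300         # points to the requester when the good is given
HELP_COST = 200       # points paid by the player who gives the good

def play_round(players, allow_referrals, rng):
    # 1. Each player learns the one good they need and the five they can produce.
    for p in players:
        p["need"] = rng.randrange(N_GOODS)
        p["produces"] = set(rng.sample(range(N_GOODS), CAN_PRODUCE))

    # 2. Each player sends request messages (strategy not modeled: here, one
    #    message to a randomly chosen other player).
    outstanding = []
    for p in players:
        target = rng.choice([q for q in players if q is not p])
        p["points"] -= MESSAGE_COST
        outstanding.append((p, target))

    # 3. Message holders respond; the round ends when no messages are outstanding.
    while outstanding:
        origin, holder = outstanding.pop()
        can_give = origin["need"] in holder["produces"]
        if can_give and rng.random() < 0.5:            # placeholder give/ignore decision
            holder["points"] -= HELP_COST
            origin["points"] += BENEFIT
        elif allow_referrals and rng.random() < 0.5:   # indirect treatment only: refer the request
            new_holder = rng.choice([q for q in players if q is not holder and q is not origin])
            holder["points"] -= MESSAGE_COST           # assuming forwarding also costs a message
            outstanding.append((origin, new_holder))
        # otherwise the request is ignored; the requester never learns why

rng = random.Random(0)
players = [{"points": 1500} for _ in range(10)]        # 10 players, 1500-point initial accounts
play_round(players, allow_referrals=True, rng=rng)
print([p["points"] for p in players])
```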

  47. Subjects • 89 subjects from University of Tucuman • variety of majors • 4 direct and 5 indirect games of 2 sessions each

  48. Subjects • Income proxy [figure]

  49. Hypotheses: • H1: The probability that a needed good is provided in the indirect treatment is larger than the probability that a needed good is provided in the direct treatment. Therefore, average earnings should be higher in the indirect treatment.

  50. Hypotheses: • H2: The probability that a givable request is granted is greater in the indirect treatment. Note that H1 is not a consequence of H2, because in the indirect game messaging is less efficient: an agent might not relay a message, in which case it never reaches a player who could have provided the good. So it is possible that more givable requests are granted in the indirect game, but earnings are lower nevertheless.
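
One way to state the two hypotheses formally (my notation, not the authors'):

\[
\text{H1:}\quad \Pr[\text{needed good provided} \mid \text{indirect}] \;>\; \Pr[\text{needed good provided} \mid \text{direct}],
\]
\[
\text{H2:}\quad \Pr[\text{granted} \mid \text{request reaches a player who can give, indirect}] \;>\; \Pr[\text{granted} \mid \text{request reaches a player who can give, direct}].
\]

H2 conditions on a request actually reaching a potential giver; since relayed messages may stop en route in the indirect game, H2 can hold even while provision rates and earnings (H1) are lower.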
