
Field Experiment on Incentive of Communication, Compilation and Evaluation




Presentation Transcript


  1. Field Experiment on Incentive of Communication, Compilation and Evaluation
  • Motivation for the study
  • What is a field experiment?
  • Framework of the experiment
  • The flow of the experiment
  • Analysis of results
  Takuya Nakaizumi, 2009.6.10

  2. Motivation for the study
  • Does a monetary incentive affect conversation or editing?
  • Editors and writers have their own opinions. They tend to reject others' comments on their writing or compilation, especially when they disagree with or dislike them.
  • Even in such circumstances, incentives may lead them to edit differently than they otherwise would.
  • In this experiment, we tested whether a monetary incentive affected the editing of a conversation on a BBS.

  3. Evaluation of the editing
  • To give the editor an adequate incentive, we need an evaluation of performance on which the reward can depend.
  • The editing has a true value, but that value is quite difficult to estimate.
  • We therefore employ a peer-review system: the editing is evaluated by all participants of the BBS conversation.
  • The editor's reward depends on the participants' evaluations.

  4. Evaluation behavior
  • Participants in the BBS conversation may rate the editing higher when it appeals to them.
  • The reward should affect only the editor's effort; the reward itself should not affect the evaluations given by the other participants.
  • If so, higher rewards should lead to more attractive editing. → We test this by experiment.

  5. What is a field experiment?
  Harrison and List [2004]: "Our primary point is that dissecting the characteristics of field experiments helps define what might be better called an ideal experiment, in the sense that one is able to observe a subject in a controlled setting but where the subject does not perceive any of the controls as being unnatural and there is no deception being practiced."
  The distinction between the laboratory and the field:
  • In a field experiment, sample populations are drawn from many different domains, whereas the sample in an ordinary laboratory experiment consists of students.
  • The laboratory environment might not be fully representative of the field environment (cf. the winner's curse).

  6. Determining the field context of an experiment (Harrison and List [2004]):
  • the subject (participant) pool,
  • the information that the subjects bring to the task,
  • the commodity,
  • the task or trading rules applied,
  • the stakes,
  • the environment in which the subjects operate.

  7. Our field experiment
  Taxonomy of field experiments, Harrison and List [2004]:
  • 'artefactual' field experiment: uses 'non-standard' subjects, i.e., experimental participants from the market of interest.
  • 'framed field experiment': the same as an artefactual field experiment, but with field context in the commodity, task, stakes, or information set of the subjects.
  • 'natural field experiment': the same as a framed field experiment, but the environment is one where the subjects naturally undertake these tasks and where the subjects do not know that they are participants in an experiment.

  8. The flow of the experiment 1
  • Pre-survey of topics and of the participants' attitudes
  • Discussion on a BBS on a specific topic: religion, politics, marketing, finance (the last two in Japan only)
  • Discussion is rewarded if a participant contributes more than four times.
  • Choice of the editor of the discussion by calculating the Social Influence Score (SIS)
  • Editing by the chosen editor
  • Evaluation of the editing by the participants; the editor is then rewarded according to the evaluation.

  9. The flow of the experiment 2
  • Pre-survey: the participants' opinions on which topics they prefer to discuss, and the variance of opinions in each domain
  • Discussion on the BBS: religion, politics (marketing and finance in Japan only)
  • The most and the least conflicting topics are chosen based on the variance of opinions in each domain, as in the sketch below.
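  The transcript does not spell out how the conflict measure was computed; a minimal sketch of the topic-selection step, assuming opinions are coded on a numeric scale (the dictionary, scale, and variable names are hypothetical):

```python
import statistics

# Hypothetical pre-survey: opinion scores per topic within one domain.
survey = {
    "topic_a": [1, 7, 2, 6, 1],   # widely split opinions -> high variance
    "topic_b": [4, 4, 5, 4, 4],   # near-consensus -> low variance
}

# Variance of opinions is the conflict measure described on the slide.
variances = {t: statistics.variance(v) for t, v in survey.items()}
most_conflicting = max(variances, key=variances.get)    # "difficult" topic
least_conflicting = min(variances, key=variances.get)   # "easy" topic
print(most_conflicting, least_conflicting)  # topic_a topic_b
```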

  10. The flow of the experiment 3
  • Calculate the Social Influence Score (SIS), based on mutual evaluation.
  • As in Google's page ranking, an evaluation carries more weight when the evaluator has a higher SIS; the resulting distribution is long-tailed. A sketch follows below.
  • Editing/summarizing is assigned to the discussant with the highest SIS.
  • Two reward levels for the editing: $20 or $80.
  • If the others' assessment is low, the reward is reduced: "How satisfied are you with the summary by the selected editor? In order to guarantee the quality of the discussion summary, if the editor is rated less than 'not satisfactory' (a 2 out of a scale of 7) on average, then the bonus for editing will be reduced by half."
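  The exact SIS formula is not given in the transcript; a minimal sketch of a PageRank-style score consistent with the description (an evaluation counts for more when the evaluator's own score is higher) might look like the following. The damping factor, convergence threshold, and function name are assumptions, and dangling evaluators are handled naively.

```python
import numpy as np

def social_influence_score(evals: np.ndarray, damping: float = 0.85,
                           tol: float = 1e-8, max_iter: int = 1000) -> np.ndarray:
    """PageRank-style SIS sketch: evals[i, j] >= 0 is i's rating of j."""
    n = evals.shape[0]
    # Normalize each evaluator's ratings so each distributes one unit of weight.
    row_sums = evals.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # evaluators who rated nobody
    transition = evals / row_sums
    sis = np.full(n, 1.0 / n)              # uniform starting scores
    for _ in range(max_iter):
        # An evaluation is weighted by the evaluator's current SIS.
        new_sis = (1 - damping) / n + damping * sis @ transition
        if np.abs(new_sis - sis).sum() < tol:
            break
        sis = new_sis
    return sis

# Example: 3 participants; participant 2 is rated highly by both others
# and therefore ends up with the highest SIS.
ratings = np.array([[0, 1, 4],
                    [2, 0, 5],
                    [3, 1, 0]], dtype=float)
print(social_influence_score(ratings))
```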

  11. Experimental conditions
  • Domains
    • Economic: marketing, investment (in Japan only)
    • Non-economic: politics, religion
  • Difficulty of topic (based on opinion variance in the pre-survey)
    • High: the most conflicting topic
    • Low: the least conflicting topic
  • Amount of reward to an editor
    • High: $80 (¥8,000 in Japan)
    • Low: $20 (¥2,000 in Japan)

  12. Incentive to discuss and edit
  • Discussion on the BBS: participants who post more than four comments receive the participation fee.
  • A higher reward for the editor also gives participants an incentive to discuss the topic.
  • Editing of the BBS: high reward of $80 (¥8,000 in Japan) or low reward of $20 (¥2,000 in Japan).
  • A higher reward may lead the editor to flatter the participants more, which may raise the actual score. (A sketch of the payout rule follows below.)
  • Evaluation of the editing: there is no monetary incentive to evaluate the editing!
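  As an illustration of the payment rule quoted on slide 10, a short sketch; the function name and the integer encoding of the 7-point scale are assumptions, while the halving threshold of 2 out of 7 is taken from the transcript.

```python
def editor_payout(ratings: list[int], base_reward: float) -> float:
    """Bonus rule from slide 10: if the average rating on the 7-point
    satisfaction scale falls below 2 ('not satisfactory'), the editing
    bonus is cut in half."""
    avg = sum(ratings) / len(ratings)
    return base_reward / 2 if avg < 2 else base_reward

# High-reward condition ($80) with poor average ratings -> halved bonus.
print(editor_payout([1, 2, 1, 3], base_reward=80))  # 40.0 (avg 1.75 < 2)
print(editor_payout([5, 4, 6], base_reward=20))     # 20.0
```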

  13. Summary of experiment data in Japan (table omitted in transcript)

  14. Summary of experiment data in the U.S. and combined total (table omitted in transcript)

  15. Basic model (1)
  • The evaluation depends on the quality of editing by editor 0, denoted X0.
  • Assumption 1: participant i's utility is [equation omitted in transcript].
  • Thus the evaluation score of i by j is [equation omitted in transcript].
  • When the editor edits the conversation, she does not know the other participants' evaluations. She therefore faces uncertainty, which we describe by a noise term ε with distribution [omitted in transcript].

  16. Basic model (2)
  • The quality of editing is assumed to be a function of the editor's effort x: X0 = X0(x) = x.
  • The editor's reward is W(s0), where s0 is the evaluation score.
  • The cost function is αc(x), with α > 0, c'(x) > 0, c''(x) > 0.
  • Thus the editor's expected benefit from editing is E[B(x)] = E[W(s0)] − αc(x).

  17. Basic model (3)
  • Maximizing E[B(x)] over x yields the first-order condition (1): marginal expected reward equals marginal cost, ∂E[W(s0(x))]/∂x = αc'(x).
  • Since c''(x) > 0, there is an interior solution of (1). → Proposition 1

  18. Basic model (4)
  • Proposition 1 means the optimal effort x0 is a non-decreasing function of the reward w and a non-increasing function of the cost parameter α (topic difficulty); see the reconstruction below.
  • Hypotheses:
    1) A higher reward with an easy topic leads to the highest score.
    2) A lower reward with a difficult topic leads to the lowest score.
    3) The scores for a higher reward with a difficult topic and a lower reward with an easy topic lie between them.
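  The transcript drops the displayed equations on slides 15-18. As a hedged reconstruction, assuming the expected reward is separable, E[W(s0(x))] = w·s(x) with s'(x) > 0 and s''(x) ≤ 0 (these functional forms are not given in the source), the comparative statics behind Proposition 1 follow from implicit differentiation of the first-order condition:

```latex
% Editor's problem (reconstruction): expected reward minus effort cost
\max_{x \ge 0} \; E[B(x)] = w\, s(x) - \alpha\, c(x)

% First-order condition (1): marginal expected reward = marginal cost
w\, s'(x_0) = \alpha\, c'(x_0) \tag{1}

% Implicit differentiation of (1). The common denominator is positive
% because c'' > 0 and s'' <= 0, so the second-order condition holds:
\frac{\partial x_0}{\partial w}
  = \frac{s'(x_0)}{\alpha c''(x_0) - w\, s''(x_0)} \;\ge\; 0,
\qquad
\frac{\partial x_0}{\partial \alpha}
  = \frac{-\,c'(x_0)}{\alpha c''(x_0) - w\, s''(x_0)} \;\le\; 0.
```

  Under these assumptions, effort (and hence quality and score) rises with the reward and falls with topic difficulty, which is exactly the ordering stated in the hypotheses.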

  19. Results of experiment (1): evaluation score (1)
  Mean evaluation score by topic difficulty and editing reward. Cell format: mean (SD) N; the difference column is the $80 mean minus the $20 mean, with the p-value of the GWT in parentheses.

  Difficulty | $20 reward        | $80 reward        | Difference (p-value)
  Easy       | 0.889 (1.5356) 117 | 0.817 (1.612) 131 | -0.072 (0.1635)
  Difficult  | 1.127 (1.315) 142  | 0.690 (1.577) 155 | -0.437* (0.0017)
  Total      | 1.019 (1.421) 259  | 0.748 (1.592) 286 | -0.271* (0.03)
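  The per-subject scores are not in the transcript and the exact test behind "GWT" is not specified. Purely as an illustration of how such a rank-based two-sample comparison could be run, here is a sketch using scipy's Mann-Whitney U test on placeholder data drawn to match the reported group sizes and moments for the difficult topics (not the real data, and not necessarily the test the authors used):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Placeholder samples with the reported sizes (n=142 at $20, n=155 at $80)
# and the reported means/SDs; the actual per-subject scores are unknown.
scores_20 = rng.normal(1.127, 1.315, size=142)
scores_80 = rng.normal(0.690, 1.577, size=155)

# Rank-based two-sample test of whether the $80 scores differ from the $20 ones.
stat, p = mannwhitneyu(scores_20, scores_80, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```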

  20. Results of experiment (2): length of editing (effort) (figure omitted in transcript)

  21. Results of experiment (3): analysis of evaluation score (1)
  Ranking predicted by ordinal incentive theory vs. the observed mean scores:

  Rank        | Predicted (ordinal incentive theory) | Observed (experimental results)
  1 (highest) | higher reward, easy topic            | 1.127: lower reward, difficult topic
  2           | higher reward, difficult topic       | 0.889: lower reward, easy topic
  3           | lower reward, easy topic             | 0.817: higher reward, easy topic
  4 (lowest)  | lower reward, difficult topic        | 0.748: higher reward, difficult topic

  22. Results of experiment (4): analysis of evaluation score (2)
  Ranking predicted by incentive theory with spite vs. the observed mean scores:

  Rank        | Predicted (incentive theory with spite) | Observed (experimental results)
  1 (highest) | lower reward, easy topic                | 1.127: lower reward, difficult topic
  2           | lower reward, difficult topic           | 0.889: lower reward, easy topic
  3           | higher reward, easy topic               | 0.817: higher reward, easy topic
  4 (lowest)  | higher reward, difficult topic          | 0.748: higher reward, difficult topic

  23. Results of experiment (5): analysis of evaluation score (3)
  Ranking predicted by incentive theory with fairness evaluation vs. the observed mean scores:

  Rank        | Predicted (incentive theory with fairness) | Observed (experimental results)
  1 (highest) | lower reward, difficult topic              | 1.127: lower reward, difficult topic
  2           | lower reward, easy topic                   | 0.889: lower reward, easy topic
  3           | higher reward, difficult topic             | 0.817: higher reward, easy topic
  4 (lowest)  | higher reward, easy topic                  | 0.748: higher reward, difficult topic

  24. Results of experiment (6): evaluation score (2) (figure omitted in transcript)

  25. Results of experiment (7): evaluation score of the U.S.
  Cell format: mean (SD) N; the difference column is the $80 mean minus the $20 mean.

  Difficulty | $20 reward      | $80 reward       | Difference
  Easy       | 1.28 (1.35) 53  | 1.29 (1.44) 65   | 0.01
  Difficult  | 1.14 (1.48) 69  | 1.04 (1.53) 73   | -0.10
  Total      | 1.20 (1.42) 122 | 1.16 (1.491) 138 | -0.04

  26. Results of experiment (8): evaluation score of Japan
  Cell format: mean (SD) N; the difference column is the $80 mean minus the $20 mean.

  Difficulty | $20 reward       | $80 reward        | Difference
  Easy       | 0.563 (1.553) 41 | 0.348 (1.653) 68  | -0.215
  Difficult  | 1.109 (1.603) 58 | 0.378 (1.697) 62  | -0.731*
  Total      | 0.854 (1.578) 99 | 0.365 (1.669) 130 | -0.489*

  27. Results of experiment (9): evaluation score (3) (figure omitted in transcript)

  28. Results of experiment (10): evaluation score of the U.S. by domain
  Cell format: mean (SD) N; the difference column is the $80 mean minus the $20 mean.

  Domain   | $20 reward        | $80 reward        | Difference
  Religion | 0.963 (1.553) 54  | 1.429 (1.325) 70  | 0.466
  Politics | 1.397 (1.603) 68  | 0.882 (1.697) 66  | -0.515*
  Total    | 1.205 (1.578) 122 | 1.159 (1.669) 138 | -0.046

  29. Analysis of the results
  The hypothesis is rejected. Open questions:
  • How do the participants evaluate the editing? → Spite or altruism?
  • Valuation of the editing by outsiders
  • Can the market value of the editing be estimated?
  • Evaluation of the evaluators

  30. Possibility of a behavioral evaluation
  • Suppose the evaluation function depends not only on the quality of the editing but also on the reward the editor receives.
  • The evaluation score then reflects both the editor's effort function and the other participants' evaluation function as evaluators.
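  One way to write down the behavioral evaluation function the slide describes, in notation consistent with the basic model, as a sketch only (the linear form and the parameter γ are assumptions, not from the source):

```latex
% Participant i's evaluation of editor 0: a quality term plus a
% reward-sensitivity term (gamma > 0: spite toward a highly paid editor;
% gamma < 0: altruism). Reconstruction, not from the source.
s_{i0} = v\bigl(X_0(x)\bigr) - \gamma\, w + \varepsilon_i
```

  With γ > 0, a higher reward w lowers the score directly, pushing the high-reward conditions down the ranking in the direction of the observed scores on slide 22.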

  31. Future exercises
  • How do the participants evaluate the editing? Spite or altruism?
  • Valuation of the editing by outsiders
  • Can the market value of the editing be estimated?
  • Evaluation of the evaluators
