
Behavioral Experiments with Work Effort: Needing a Piece-Rate Metric

Explore the use of piece-rate metrics in behavioral experiments on work effort, motivation, and reciprocity. Analyze the findings of previous studies and discuss the implications.


Presentation Transcript


  1. Behavioral Experiments with Work Effort: Needing a Piece-Rate Metric
     Stefano DellaVigna (UC Berkeley and NBER)
     Based on work with John List, Ulrike Malmendier, Devin Pope, and Gautam Rao
     SIEPR, 1/25/2019

  2. Motivation: Behavioral Development
     Some of the most exciting developments in behavioral development economics come from experiments built around work effort:
     • Time preferences: self-control (Kaur, Kremer, Mullainathan, 2015)
     • Social preferences: pay equity (Breza, Kaur, Shamdasani, 2018); ethnic relations (Hjort, 2013)
     • Trust and expropriation (Jakiela, Ozier, 2016)
     • Social networks (Beaman and Magruder, 2012)

  3. Motivation: Experimental/Behavioral
     Similarly, in behavioral and experimental economics there is important work with real-effort experiments:
     • Gender and performance under competition (Gneezy, Niederle, Rustichini, 2003)
     • Reference dependence (Abeler et al., 2011; Gill and Prowse, 2012; Goette et al., 2017)
     • Social preferences: gift exchange (Gneezy, List, 2006; Kube, Marechal, and Puppe, 2012, 2013); intrinsic motivation (Ariely, Bracha, and Meier, 2009)
     • Time preferences (Augenblick, Niederle, Sprenger, 2015; Augenblick and Rabin, 2018)
     • Horse race of behavioral motivators (DellaVigna and Pope, 2018a,b)

  4. Let’s consider these experiments through the lens of a simple model.

  5.-7. Simple Model (three equation slides; the formal model did not transcribe)
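     As a sketch of the likely setup (the specific functional form here is an assumption, patterned on the model in DellaVigna, List, Malmendier, and Rao, 2016): the worker picks effort e to maximize motivation times output minus an isoelastic effort cost,

        \max_e \; (s + p)\,e - c(e), \qquad c(e) = \frac{k\, e^{1+\gamma}}{1+\gamma}
        \;\;\Rightarrow\;\; e^{*} = \left(\frac{s+p}{k}\right)^{1/\gamma},
        \qquad \frac{\partial \log e^{*}}{\partial \log (s+p)} = \frac{1}{\gamma},

     where s is baseline motivation to work for the employer (shifted by gifts, reciprocity, etc.), p is the piece rate, and 1/γ is the effort elasticity referenced on the slides below (γ=1 means elasticity 1, γ=10 means elasticity 0.1).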

  8. Example 1. Gift Exchange I
     • Consider the gift exchange design a la Gneezy and List (2006)
     • Hire workers for a one-time task at a promised wage, no piece rate
     • Surprise the treatment group with a gift at the beginning of work
     • Does the treatment group work harder due to reciprocity?
     • Show results for Kube, Marechal and Puppe (2012)
     • N=63; treatment group receives a mug, or money

  9. Example 1. Gift Exchange I
     • The group with the mug exerts 30% higher effort
     • How much reciprocity would account for this?
     • If the elasticity of effort is 1 (γ=1), motivation to work for the employer should increase by 30%
     • If the elasticity of effort is 0.2 (γ=5), motivation should increase by 150%
     • If the elasticity of effort is 0.1 (γ=10), motivation should increase by 300%
     • The first scenario implies sizable reciprocity
     • The last scenario implies huge (too large?) reciprocity
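
     A quick check of this arithmetic, using the first-order relation implied by the model sketch above (Δ log motivation ≈ γ · Δ log effort); the input numbers are the slide's own:

        # First-order approximation: a 30% effort increase requires a
        # gamma-times-larger percentage increase in motivation.
        effort_increase = 0.30                  # +30% output in the mug treatment
        for gamma in (1, 5, 10):                # effort elasticities 1, 0.2, 0.1
            print(f"gamma = {gamma:2d}: implied motivation increase ~ "
                  f"{gamma * effort_increase:.0%}")
        # prints 30%, 150%, 300%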

  10. Example 2. Gift Exchange II
     • DellaVigna, List, Malmendier, Rao (2016): similar design, N=446
     • Hire workers for a one-time task at a promised wage
     • After 4 hours, surprise the treatment group with a gift: positive ($), positive (mug), negative (pay lower than expected)
     • Does the treatment group with the mug work harder due to reciprocity?
     • No statistically significant impact; can reject a 5% increase in effort vs. control

  11. Gift Exchange, I vs. II
     • Can we reconcile the two findings?
     • The two experiments have statistically different effort increases due to treatment
     • BUT they could have the same underlying increase in motivation if the elasticities differ
     • That is, if the Kube et al. task is more elastic to motivation
     • So, how do we get gamma, and thus the elasticity?
     • Piece-rate-metric design (DellaVigna et al., 2015):
       • Control group, no piece rate
       • Treatment group (gift/delay/comparison), no piece rate
       • Control group with low piece rate
       • Control group with high piece rate
     • The two piece rates identify the elasticity and baseline motivation, as in the sketch below
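
     A minimal sketch of how the two piece rates (plus the no-piece-rate control) pin down the elasticity and baseline motivation, assuming the isoelastic form from the model sketch above; the piece rates and cell means here are hypothetical, not from the paper:

        import numpy as np
        from scipy.optimize import brentq

        piece_rates = np.array([0.00, 0.10, 0.20])     # $ per unit (hypothetical)
        outputs     = np.array([100.0, 107.0, 111.0])  # mean output per cell (hypothetical)

        # With e(p) = ((s + p)/k)^(1/gamma), the ratio of log-output gaps
        # across cells depends only on baseline motivation s:
        def ratio_gap(s):
            lhs = np.log(outputs[2] / outputs[0]) / np.log(outputs[1] / outputs[0])
            rhs = np.log((s + piece_rates[2]) / s) / np.log((s + piece_rates[1]) / s)
            return rhs - lhs

        s = brentq(ratio_gap, 1e-4, 10.0)              # baseline motivation, $ per unit
        inv_gamma = np.log(outputs[1] / outputs[0]) / np.log((s + piece_rates[1]) / s)
        print(f"s = {s:.3f}, gamma = {1/inv_gamma:.1f}, elasticity = {inv_gamma:.3f}")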

  12. Simple Model (equation slide; content not transcribed)

  13. Piece-Rate Design
     • How common is the piece-rate design?
     • Example 1 above (Kube, Marechal, and Puppe, 2012): no piece rate
     • Survey of published papers in QJE-AER-JPE-EMA-RES
     • 23 studies with real-effort tasks or manipulated workplace effort (e.g., Gneezy et al., 2003; Abeler et al., 2011; Kaur et al., 2015)
     • How many have piece-rate variation? Only 6 out of 23 studies
     • In how many studies can the piece-rate variation be used to identify a behavioral effect? Only 3:
       • Gill and Prowse (2012) – reference dependence
       • Augenblick, Niederle, and Sprenger (2015) – identify time preferences
       • DellaVigna and Pope (2018a,b) – behavioral parameters

  14. Piece-Rate Design
     • Discuss three papers with a piece-rate-metric design:
       • Araujo et al. (2016) – slider task
       • DellaVigna et al. (2015) – stuffing envelopes
       • DellaVigna and Pope (2018) – typing task
     • Effort in all 3: units of output in a fixed time
     • Case 1. Araujo et al. (2016):
       • Slider task from Gill and Prowse (2012), widely used
       • Between-subject design, 3 piece rates: 0.5c, 2c, 8c; N=148
       • Average output by treatment: 26.1, 26.6, 27.3
       • Elasticity of 0.025 (!) (γ=40)
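
     For intuition, a rough log-log arc elasticity from the cell means reported on the slide (this is a back-of-the-envelope calculation, not the paper's estimate, but it is of the same order of magnitude):

        import numpy as np

        p_low, p_high = 0.5, 8.0      # lowest and highest piece rates, in cents
        y_low, y_high = 26.1, 27.3    # average output in those cells
        arc_elasticity = np.log(y_high / y_low) / np.log(p_high / p_low)
        print(f"arc elasticity ~ {arc_elasticity:.3f}")   # ~0.016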

  15. Piece-Rate Design
     • Case 2. DellaVigna et al. (2015):
       • Workers stuff envelopes for an employer (charities) for 20 minutes, 8 times in a row
       • Piece rate varies: 0c, 10c, 20c
       • Estimated elasticity: 0.1 (γ=10)

  16. Piece-Rate Design
     • Case 3. DellaVigna and Pope (2018):
       • Online subjects (MTurk) press the a-b keys for 10 minutes
       • N=10,000 (high statistical power)
       • Piece rate (per 100 presses): 0c, 1c, 4c, 10c
       • Elasticity of 0.04

  17. Piece-Rate Design
     • Three cases → elasticity unlikely to be >> 0.1
     • Implication 1a: Some observed treatment effects in previous papers imply very large behavioral impacts
       • E.g., the 30% productivity increase in Kube et al. (2012) would imply a 300%+ increase in motivation
     • Implication 1b: Null effects are consistent with sizable behavioral effects
       • Many of these are likely unpublished (file-drawer effect)
     • Implication 2: Real-effort designs need much larger samples than previously realized for power calculations (see the sketch below)
       • Possible saving grace: online samples
     • Implication 3: Can we find a design with a larger elasticity?
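
     To make Implication 2 concrete, a hypothetical power calculation: with an effort elasticity of 0.1, a treatment that raises motivation by 20% moves output by only about 2%; with an assumed coefficient of variation of output of 0.4, detecting that at 80% power and 5% significance takes thousands of subjects per arm. All numbers here are illustrative assumptions, not figures from the talk.

        from scipy.stats import norm

        elasticity       = 0.10   # upper end of the elasticities in the three cases
        motivation_boost = 0.20   # hypothetical 20% increase in motivation
        cv               = 0.40   # assumed coefficient of variation of output

        effect = elasticity * motivation_boost            # ~2% output increase
        d = effect / cv                                    # standardized effect size
        z_alpha, z_beta = norm.ppf(0.975), norm.ppf(0.80)
        n_per_arm = 2 * (z_alpha + z_beta) ** 2 / d ** 2   # two-sample means formula
        print(f"implied output effect = {effect:.1%}, n per arm ~ {n_per_arm:,.0f}")
        # roughly 6,300 subjects per arm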

  18. Extra-Work Design
     • Proposed design in DellaVigna et al. (2015): the extra-work design
     • Idea from Abeler et al. (2015)
     • We hire workers for a one-time 2-hour RA coding job for $60
     • Apply the piece-rate-metric design to the decision to stay for extra work
     • After the two hours, the randomization takes place: “Thank you for your work today. You have completed the work we hired you for, so here is the $60, as advertised. [Monetary Gift/Non-Monetary Gift: In addition, as a token of appreciation, the Becker Center is giving you (Monetary Gift: an additional $15 for helping today. Therefore, we are paying you a total of $75. / Non-Monetary Gift: this thermos with a retail value of $15 for helping today.)]”
     • After a short break, we ask: “If you happen to have some time available, even a few minutes, and are willing to do some extra work, that would be appreciated. Would you be willing to help us enter some more of the data for up to one hour? [Control/Monetary Gift/Non-Monetary Gift/Early Gift: Unfortunately, we cannot compensate you for this extra time.] [M-H Piece Rate: We will pay you [25¢/50¢] for every minute of work that you do, up to one hour. For example, if you do an extra 20 minutes of work, we will pay you $5 [$10] extra.]”
     • Outcome: number of minutes the worker stays (pre-registered)

  19. Extra-Work Design
     • Preliminary results (N=150 out of N=300 planned)
     • Much stronger response to incentives: elasticity of about 0.3
     • What about the gift then?

  20. Extra-Work Design
     • Preliminary results (N=150 out of N=300 planned)
     • What about the gift then? Suggestive evidence of a gift effect

  21. Extra-Work Design
     • The extra-work design provides more statistical power to detect gift exchange
     • The size of this effect is consistent with the null effect in the traditional design, given the greater power here
     • Replicate with an online MTurk sample in DellaVigna and Pope (2019):
       • Hire people to code historical data (WWII conscription cards)
       • When done with 40 cards, ask: “could you do up to 20 more?”
       • Vary incentives with the piece-rate metric: 0c, 0.5c, 2c, 5c per card
       • Elasticity of about 0.3 here as well! Better statistical power

  22. Piece-Rate Design
     • Hence, the piece-rate-metric design can identify new designs with a higher elasticity, and thus higher statistical power to capture behavioral patterns
     • But perhaps within a group of similar-enough designs we can assume that the effort elasticity remains similar. Is it?
     • DellaVigna and Pope (2019): run two similar real-effort experiments
       • 2,500 subjects in each, 10-minute task, parallel piece rates
       • Task A: number of a-b presses in 10 minutes
       • Task B: number of WWII cards coded in 10 minutes

  23. Piece-Rate Design
     • Are the two tasks going to have a similar elasticity?
     • [Figures: piece-rate response in Task A (a-b presses) and Task B (WWII coding)]
     • Task A has an elasticity of 0.04, low but highly significant
     • Task B has an elasticity << 0.01, not significant
     • Hard to predict ex ante → important to have piece rates

  24. Conclusion
     • Real-effort and work-effort tasks are an important part of the arsenal for development and behavioral economists
     • We argue that a piece-rate-metric design complements existing designs
     • Advantages:
       • Allows for structural estimation of parameters
       • Provides a check on whether behavioral effect sizes are plausible
       • Helps with the evaluation of null effects
       • Helps identify real-effort designs that are more responsive to motivation
     • We hope to see it adopted more widely!
