
Proper Scoring Rules and Prospect Theory

Joep Sonnemans, Theo Offerman, Gijs van de Kuilen, Peter P. Wakker. SPUDM, Warsaw, Poland, August 20, 2007. Topic: our chance estimates of Hillary Clinton becoming the next president of the US.


Presentation Transcript


1. Joep Sonnemans, Theo Offerman, Gijs van de Kuilen, Peter P. Wakker. Proper Scoring Rules and Prospect Theory. August 20, 2007, SPUDM, Warsaw, Poland. Topic: our chance estimates of Hillary Clinton becoming the next president of the US. H: Hillary will win. not-H: someone else.

2. Proper scoring rules: a beautiful tool to measure subjective probabilities ("read minds"; see later). Developed in the 1950s and based on the theory of those days: expected value. Still widely applied today, but never updated since (prospect theory …); it is high time for such an update! Mutual benefits: proper scoring rules <==> prospect theory.

3. Proper scoring rules explained. (Suggestion for the whole lecture: don't read the algebra; the numerical examples will clarify.) You choose 0 ≤ r ≤ 1, as you like. We call r your reported probability of H (Hillary president). You then receive the following prospect: 1 – (1–r)² if H obtains, and 1 – r² if not-H obtains. What r should you choose?

4. First assume EV. After some algebra: with p your true subjective probability, you optimize p·(1 – (1–r)²) + (1–p)·(1 – r²); the first-order condition is 2p(1–r) – 2r(1–p) = 0; so r = p. The optimal r = your true subjective probability of Hillary winning. Easy in the algebraic sense. Conceptually: !!! Wow !!! de Finetti (1962) and Brier (1950) were the first neuro-scientists!
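A minimal numerical check of this derivation (not part of the slides): maximize the expected score over a grid of candidate reports and confirm that the optimum sits exactly at r = p, here with p = 0.75.

```python
# Numerical check that the quadratic scoring rule is proper under expected
# value: the expected score is maximized exactly at r = p.
import numpy as np

def expected_score(r, p):
    # Expected value of the prospect: 1 - (1-r)^2 under H, 1 - r^2 under not-H.
    return p * (1 - (1 - r) ** 2) + (1 - p) * (1 - r ** 2)

p = 0.75                                  # true subjective probability of H
grid = np.linspace(0.0, 1.0, 100001)      # candidate reports r
r_opt = grid[np.argmax(expected_score(grid, p))]
print(r_opt)                              # 0.75: optimal report equals p
```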

  5. 5 "Bayesian truth serum" (Prelec, Science, 2005). Superior to elicitations through preferences . Superior to elicitations through indifferences ~ (BDM). Widely used: Hanson (Nature, 2002), Prelec (Science 2005). In accounting (Wright 1988), Bayesian statistics (Savage 1971), business (Stael von Holstein 1972), education (Echternacht 1972), medicine (Spiegelhalter 1986), psychology (Liberman & Tversky 1993; McClelland & Bolger 1994), experimental economics (Nyarko & Schotter 2002). Remember: based on expected value! We want to introduce these very nice features into prospect theory.

6. Survey. Part I: theoretical analysis. Part II: theoretical analysis "reversed." Part III: implementation in an experiment.

7. Part I. Deriving r from Theories. EV (already done) & 3 deviations. Two deviations from EV regarding risk attitude (that we want to correct for): 1. utility curvature; 2. probability weighting. A third deviation concerning subjective beliefs (that we want to measure): 3. nonadditive beliefs and ambiguity aversion.

8. Let us assume that you strongly believe in Hillary (charming husband …). Your "true" subj. prob.(H) = 0.75. Before turning to the deviations, a graph for EV. Under EV, your optimal rH = 0.75.

9. [Figure: curves of R(p) for EV, EU and nonEU, with the values rEV = 0.75, rEU = 0.69, rnonEU = 0.61 marked; see slides 12 (EU), 16 (nonEU), and 20 (nonEUA) for the worked examples.] Caption: Reported probability R(p) = rH as a function of true probability p, under (a) expected value (EV); (b) expected utility with U(x) = x^0.5 (EU); (c) nonexpected utility for known probabilities, with U(x) = x^0.5 and with the common w(p) (slide 14). rnonEUA: nonexpected utility for unknown probabilities ("ambiguity").

10. So far we assumed EV (as does everyone using proper scoring rules, but as does no decision theorist at SPUDM …). Deviation 1 from EV: EU with U nonlinear. Now optimize p·U(1 – (1–r)²) + (1–p)·U(1 – r²).

11. Reversed (and explicit) expression:
r = p / ( p + (1–p) · U′(1–r²)/U′(1–(1–r)²) )
p = r / ( r + (1–r) · U′(1–(1–r)²)/U′(1–r²) )

12. How to bet on Hillary? [Expected utility]. EV: rEV = 0.75. Expected utility with U(x) = x^0.5: rEU = 0.69. You now bet less on Hillary; closer to safety (Winkler & Murphy 1970). (See the R(p) figure on slide 9.)
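A short numerical sketch of this EU example, assuming U(x) = x^0.5 (the curvature in the R(p) figure caption); this is an illustration consistent with rEU ≈ 0.69, not necessarily the slides' exact calibration.

```python
# Sketch of the EU example above, assuming U(x) = x**0.5; with p = 0.75 the
# optimal report drops from 0.75 (EV) to about 0.69.
import numpy as np

def expected_utility(r, p):
    U = np.sqrt                                   # U(x) = x**0.5
    return p * U(1 - (1 - r) ** 2) + (1 - p) * U(1 - r ** 2)

p = 0.75
grid = np.linspace(0.0, 1.0, 100001)
r_eu = grid[np.argmax(expected_utility(grid, p))]
print(round(r_eu, 2))                             # ~0.69, below the EV report 0.75
```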

13. Deviation 2 from EV: nonexpected utility for given probabilities (Allais 1953, Machina 1982, Kahneman & Tversky 1979, Quiggin 1982, Schmeidler 1989, Gilboa 1987, Gilboa & Schmeidler 1989, Gul 1991, Levy-Garboua 2001, Luce & Fishburn 1991, Tversky & Kahneman 1992, Birnbaum 2005). For two-gain prospects, virtually all those theories are as follows: for r ≥ 0.5, nonEU(r) = w(p)·U(1 – (1–r)²) + (1–w(p))·U(1–r²); for r < 0.5, symmetry; soit! Different treatment of the highest and lowest outcome: "rank-dependence."

14. [Figure: the common weighting function w, inverse-S shaped, with w(p) plotted against p.] w(p) = exp(–(–ln(p))^α) for α = 0.65. w(1/3) ≈ 1/3; w(2/3) ≈ 0.51.

15. Now
r = w(p) / ( w(p) + (1–w(p)) · U′(1–r²)/U′(1–(1–r)²) )
Reversed (explicit) expression:
p = w⁻¹( r / ( r + (1–r) · U′(1–(1–r)²)/U′(1–r²) ) )

16. How to bet on Hillary now? [nonEU with known probabilities]. EV: rEV = 0.75. EU: rEU = 0.69. Nonexpected utility with U(x) = x^0.5 and w(p) = exp(–(–ln(p))^0.65): rnonEU = 0.61. You bet even less on Hillary; again closer to safety. The deviations so far were at the level of behavior, not of beliefs. Now for something different, more fundamental. (See the R(p) figure on slide 9.)
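A hedged sketch of this rank-dependent example, assuming the same U(x) = x^0.5 together with the Prelec weighting function quoted on the slide; for p = 0.75 this reproduces a report near 0.61.

```python
# Sketch of the nonEU (rank-dependent) example above, with U(x) = x**0.5 and
# the weighting function w(p) = exp(-(-ln p)**0.65) quoted on the slide.
import numpy as np

def w(p, alpha=0.65):
    return np.exp(-(-np.log(p)) ** alpha)

def rank_dependent_value(r, p):
    """For r >= 0.5: w(p) * U(best outcome) + (1 - w(p)) * U(worst outcome)."""
    U = np.sqrt
    return w(p) * U(1 - (1 - r) ** 2) + (1 - w(p)) * U(1 - r ** 2)

p = 0.75
grid = np.linspace(0.5, 1.0, 50001)   # the slide's formula applies for r >= 0.5
r_noneu = grid[np.argmax(rank_dependent_value(grid, p))]
print(round(r_noneu, 2))              # ~0.61: even less betting on Hillary
```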

17. Deviation 3 from EV: nonadditive beliefs and ambiguity. Of a different nature than the previous two: not something to correct for, but the thing to measure. Unknown probabilities; ambiguity = belief or decision-attitude? (Yet to be settled.) How to deal with unknown probabilities? We have to give up Bayesian beliefs descriptively; according to some, even normatively.

18. Instead of additive beliefs p = P(H), nonadditive beliefs B(H) (Dempster & Shafer, Tversky & Koehler, etc.). All currently existing decision models: for r ≥ 0.5, nonEU(r) = w(B(H))·U(1–(1–r)²) + (1–w(B(H)))·U(1–r²). Don't recognize it? It is just '92 prospect theory = Schmeidler (1989)! Write W(H) = w(B(H)). We can always write B(H) = w⁻¹(W(H)). For binary gambles: Pfanzagl 1959; Luce ('00, Chapter 3); Ghirardato & Marinacci ('01, "biseparable").

19.
rH = w(B(H)) / ( w(B(H)) + (1–w(B(H))) · U′(1–r²)/U′(1–(1–r)²) )
Reversed (explicit) expression:
B(H) = w⁻¹( r / ( r + (1–r) · U′(1–(1–r)²)/U′(1–r²) ) )

20. How to bet on Hillary now? [Ambiguity, nonEUA]. rEV = 0.75. rEU = 0.69. rnonEU = 0.61 (under plausible assumptions). Similarly, rnonEUA = 0.52. The r's are close to the insensitive "fifty-fifty." The "belief" component B(H) = w⁻¹(W) = 0.62. (See the R(p) figure on slide 9.)
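A hedged sketch of this belief extraction: invert the reported r into W(H) and then into B(H) = w⁻¹(W(H)), assuming for illustration the U(x) = x^0.5 and Prelec w (α = 0.65) used above. The slide's B(H) = 0.62 rests on its own calibration, so the number here only comes out roughly similar.

```python
# Sketch of the "belief" extraction on this slide: from an observed report r,
# back out W(H) = w(B(H)) and then B(H) = w^-1(W(H)).
# Illustrative assumptions only: U(x) = x**0.5 and Prelec w with alpha = 0.65;
# the slide's B(H) = 0.62 uses its own (partly unstated) assumptions.
import math

def u_prime(x):                      # derivative of U(x) = x**0.5
    return 0.5 / math.sqrt(x)

def w_inverse(W, alpha=0.65):        # inverse of w(p) = exp(-(-ln p)**alpha)
    return math.exp(-((-math.log(W)) ** (1.0 / alpha)))

def belief_from_report(r):
    ratio = u_prime(1 - (1 - r) ** 2) / u_prime(1 - r ** 2)
    W = r / (r + (1 - r) * ratio)    # W(H) = w(B(H))
    return w_inverse(W)              # B(H)

print(round(belief_from_report(0.52), 2))   # roughly 0.6 under these assumptions
```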

21. Our contribution: through proper scoring rules with a "risk correction" we can easily measure B(H). Debates about the interpretation of B(H), ambiguity attitude versus belief, can come later, and we do not enter them here. We come closer to beliefs than traditional analyses of proper scoring rules, which completely ignore all deviations from EV.

22. Part II. Deriving Theoretical Things from Empirical Observations of r
We reconsider the reversed explicit expressions:
p = w⁻¹( r / ( r + (1–r) · U′(1–(1–r)²)/U′(1–r²) ) )
B(H) = w⁻¹( r / ( r + (1–r) · U′(1–(1–r)²)/U′(1–r²) ) )
Corollary. p = B(H) if they are related to the same r!!

23. Example (participant 25): stock 20, CSM certificates, dealing in sugar and bakery ingredients. Reported probability: r = 0.75. For objective probability p = 0.70, also reported probability r = 0.75. Conclusion: B(elief) of ending in the bar is 0.70! We simply measure the R(p) curves and use their inverses: this is the risk correction.
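A sketch of this risk-correction step with made-up numbers (the R(p) points below are hypothetical, not the participant's data): elicit R(p) at known probabilities, then map an event's reported r back through the inverse of that curve.

```python
# Sketch of the risk correction described on this slide: measure the R(p)
# curve with known probabilities, then map a report r for an uncertain event
# back through the inverse of that curve. The data points are invented for
# illustration; in the experiment R(p) was elicited at probabilities j/20.
import numpy as np

# Hypothetical elicited reports at known probabilities.
p_known = np.array([0.05, 0.25, 0.50, 0.70, 0.75, 0.95])
r_known = np.array([0.20, 0.40, 0.50, 0.75, 0.78, 0.90])

def risk_corrected_belief(r_event):
    """Invert the measured R(p) curve by linear interpolation: find the known
    probability that would have produced the same report."""
    return float(np.interp(r_event, r_known, p_known))

print(risk_corrected_belief(0.75))   # 0.70, as in the participant-25 example
```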

24. Our proposal takes the best of several worlds! No need to measure U, W, and w. We get a "canonical probability" without measuring indifferences (BDM …; Holt 2006). Calibration without needing many repeated observations. And we do all that with no more than simple proper-scoring-rule questions.

25. Directly implementable empirically. We did so in an experiment, and found plausible results.

26. Part III. Experimental Test of Our Correction Method

27. Method. Participants: N = 93 students. Procedure: computerized, in the lab. Groups of 15 or 16 each. 4 practice questions.

28. Stimuli 1. First we did the proper scoring rule for unknown probabilities; 72 in total. For each stock: two small intervals and, third, their union. Thus, we test for additivity.

29. Stimuli 2. Known probabilities: two 10-sided dice are thrown, yielding a random nr. between 01 and 100. Event E: nr. ≤ 75 (p = 3/4 = 15/20), etc. Done for all probabilities j/20. Motivating subjects: real incentives, two treatments/conditions. 1. All-pay: points paid for all questions; 6 points = €1; average earning €15.05. 2. One-pay (random-lottery system): one question, randomly selected afterwards, played for real; 1 point = €20; average earning €15.30.
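A small simulation of the known-probability construction, assuming the two dice are read as tens and units digits with a roll of 00 counted as 100 (the slides do not spell out the reading rule): the event nr ≤ 75 then indeed has probability 3/4.

```python
# Check of the known-probability construction: two 10-sided dice read as tens
# and units digits give a uniform number from 01 to 100 (assuming 00 is read
# as 100), so the event "nr <= 75" has probability 3/4.
import random

random.seed(0)

def roll_number():
    tens, units = random.randint(0, 9), random.randint(0, 9)
    nr = 10 * tens + units
    return 100 if nr == 0 else nr

rolls = [roll_number() for _ in range(100_000)]
print(sum(nr <= 75 for nr in rolls) / len(rolls))   # close to 0.75
```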

30. Results

31. [Figure: average correction curves.]

32. [Figure: individual corrections. Distribution F(ρ) of the individual correction ρ (ρ running from about –2.0 to 1.5), with separate curves for treatment one and treatment all.]

33. Figure 9.1. Empirical density of additivity bias for the two treatments. [Fig. a: treatment t=ONE; Fig. b: treatment t=ALL; each panel shows corrected and uncorrected counts of additivity biases.] For each interval [(j–2.5)/100, (j+2.5)/100] of length 0.05 around j/100, we counted the number of additivity biases in the interval, aggregated over 32 stocks and 89 individuals, for both treatments. With risk correction there were 65 additivity biases between 0.375 and 0.425 in treatment t=ONE, and without risk correction there were 95 such; etc. Corrections reduce nonadditivity, but more than half remains: ambiguity generates more deviation from additivity than risk. There are fewer corrections for treatment t=ALL; better to use that treatment if no correction is possible.
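A hedged sketch of the additivity check behind this figure, assuming the additivity bias for a stock is B(A) + B(B) − B(A∪B) for the two disjoint intervals and their union (the sign convention is my assumption), with biases counted in 0.05-wide bins around j/100 as the caption describes; all belief numbers below are invented for illustration.

```python
# Hedged sketch of the nonadditivity count behind this figure. Assumed here:
# additivity bias = B(A) + B(B) - B(A union B) for the two disjoint intervals
# A, B and their union; biases are then counted in 0.05-wide bins around j/100.
import numpy as np

def additivity_bias(b_a, b_b, b_union):
    """Deviation from additivity of (possibly risk-corrected) beliefs."""
    return b_a + b_b - b_union

# Hypothetical corrected beliefs (B(A), B(B), B(A union B)) for a few cases.
beliefs = [(0.30, 0.25, 0.40), (0.20, 0.20, 0.35), (0.15, 0.10, 0.25)]
biases = [additivity_bias(*b) for b in beliefs]

# 0.05-wide bins centered at j/100 for j = -60, -55, ..., 60.
edges = np.linspace(-0.625, 0.625, 26)
counts, _ = np.histogram(biases, bins=edges)
for center, count in zip(np.round(edges[:-1] + 0.025, 2), counts):
    if count:
        print(center, count)        # nonzero bins: 0.0, 0.05, 0.15 with 1 each
```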

34. Summary and Conclusion
• According to modern decision theories, proper scoring rules are heavily biased.
• We correct for those biases, with benefits for the proper-scoring-rule community and for the prospect-theory community.
• Experiment: the correction improves quality; it reduces deviations from ("rational"?) Bayesian beliefs.
• We do not remove all deviations from Bayesian beliefs. Beliefs seem to be genuinely nonadditive/non-Bayesian/sensitive-to-ambiguity.
