
Comparing Sequential Sampling Models With Standard Random Utility Models


Presentation Transcript


  1. Comparing Sequential Sampling Models With Standard Random Utility Models Jörg Rieskamp Center for Economic Psychology University of Basel, Switzerland 4/16/2012 Warwick

  2. Decision Making Under Risk French mathematicians (1654) • Rational Decision Making: Principles of Expected Value Blaise Pascal Pierre Fermat

  3. Decision Making Under Risk • St. Petersburg Paradox • Expected utility theory (1738): Replacing the value of money by its subjective value Nicholas Bernoulli Daniel Bernoulli
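
A brief numerical aside on how replacing money by its subjective value resolves the paradox: the St. Petersburg gamble's expected monetary value diverges, but under a concave utility such as Daniel Bernoulli's logarithm the expected utility is finite. A minimal sketch in Python (the logarithmic utility and the truncation of the infinite sum are illustrative choices, not taken from the slides):

    import math

    # St. Petersburg gamble: win 2**k with probability (1/2)**k, k = 1, 2, ...
    # Expected value: sum over k of (1/2)**k * 2**k = 1 + 1 + 1 + ... -> diverges.
    # Expected utility with u(x) = log(x) converges.
    expected_utility = sum((0.5 ** k) * (k * math.log(2)) for k in range(1, 200))
    print(f"Expected log-utility: {expected_utility:.4f}")   # converges to 2*log(2), about 1.386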

  4. Expected Utility Theory • Axiomatic expected utility theory von Neumann & Morgenstern, 1947

  5. Frederick Mosteller (1916 - 2006)

  6. Probabilistic Nature of Preferential Choice The authors argued that when first offering a bet with a certain probability of winning, and then increasing that probability, "there is not a sudden jump from no acceptances to all acceptances at a particular offer, just as in a hearing experiment there is not a critical loudness below which nothing is heard and above which all loudnesses are heard"; instead "the bet is taken occasionally, then more and more often, until, finally, the bet is taken nearly all the time" Mosteller & Nogee, 1951, Journal of Political Economy, p. 374

  7. Mosteller’s & Nogee’s Study • experiment conducted over 10 weeks with 3 sessions each week • participants repeatedly accepted or rejected gambles (N=30) Example: the participants had to accept or reject a simple binary gamble with a probability of 2/3 to lose 5 cents and a probability of 1/3 to win a particular amount; the winning amount varied between 5 and 16 cents
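
A quick check on these numbers: with a 2/3 chance of losing 5 cents, the gamble's expected value is zero when the 1/3-probability win equals 10 cents, so the 5-16 cent range brackets the indifference point of an expected-value maximizer. An illustrative computation in Python:

    # Expected value of the Mosteller & Nogee gamble for each winning amount (in cents)
    for win in range(5, 17):
        ev = (1 / 3) * win - (2 / 3) * 5
        print(f"win = {win:2d} cents -> EV = {ev:+.2f} cents")
    # EV crosses zero at win = 10 cents.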

  8. Results: "Subject B-I"

  9. Experimental Study Participants decided between 180 pairs of gambles; received 15 Euros as a show-up fee; one gamble was selected and played at the end of the experiment and the winning amounts were paid to the subjects. Rieskamp (2008), JEP: LMC

  10. Task

  11. Expected values of the selected gambles

  12. Results: Expected values – Choice proportions

  13. How Can We Explain the Probabilistic Character of Choice? • Consumer products

  14. Explaining the Probabilistic Character of Choice: Logit Model • Random utility theories: the utilities’ error terms are identically and independently extreme-value (Gumbel) distributed
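
Under i.i.d. extreme-value (Gumbel) errors the choice probabilities take the familiar logit (softmax) form. A minimal sketch in Python (the deterministic utilities and the scale parameter are placeholders, not values from the study):

    import numpy as np

    def logit_choice_probabilities(utilities, scale=1.0):
        """Logit rule: P(i) = exp(u_i / s) / sum_j exp(u_j / s),
        implied by additive i.i.d. extreme-value (Gumbel) errors."""
        v = np.asarray(utilities, dtype=float) / scale
        v -= v.max()                      # numerical stability
        expv = np.exp(v)
        return expv / expv.sum()

    # Hypothetical deterministic utilities for three options
    print(logit_choice_probabilities([1.0, 0.5, 0.2], scale=0.5))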

  15. Probit Model • Random utility theories: the utilities’ error terms are identically and independently normally distributed
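
With i.i.d. normal rather than extreme-value errors, the binary choice probability is a normal CDF of the scaled utility difference; for more than two options the probabilities have no closed form and are obtained by numerical integration or simulation. A binary-choice sketch in Python (utilities and noise level are placeholders):

    from math import sqrt
    from statistics import NormalDist

    def binary_probit_probability(u_a, u_b, sigma=1.0):
        """P(choose A) when each utility carries independent N(0, sigma^2) noise.
        The error difference has standard deviation sigma * sqrt(2), so
        P(A) = Phi((u_a - u_b) / (sigma * sqrt(2)))."""
        return NormalDist().cdf((u_a - u_b) / (sigma * sqrt(2)))

    print(binary_probit_probability(1.0, 0.5, sigma=0.5))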

  16. Cognitive Approach to Decision Making • Considering the information processing steps leading to a decision • Sequential sampling models - Vickers, 1970; Ratcliff, 1978 - Busemeyer & Townsend, 1993 - Usher & McClelland, 2004

  17. Sequential Sampling Models People evaluate options by continuously sampling information about the options’ attributes. Which attribute receives the attention of the decision maker fluctuates. The probability that an attribute receives the attention of the decision maker is a function of the attribute’s importance. When the overall evaluation crosses a decision threshold, a decision is made. Rieskamp, Busemeyer, & Mellers (2006), Journal of Economic Literature

  18. Dynamic Development of Preference Threshold Bound (internally controlled stopping-rule) (adapted from Busemeyer & Johnson, 2004)

  19. Dynamic Development of Preference Time Limit (externally controlled stopping-rule) (adapted from Busemeyer & Johnson, 2004)
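
The accumulation process sketched on slides 17-19 can be illustrated with a small simulation: on every step one attribute is sampled with probability proportional to its importance, the difference in the options' evaluations on that attribute is added to a preference state, and the process stops either when a threshold is crossed (internally controlled) or when a time limit is reached (externally controlled). The attribute values, weights, threshold, and noise below are made-up placeholders, not parameters from the talk:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_choice(values, weights, threshold=2.0, time_limit=1000, noise=0.3):
        """Sequential sampling between two options described on several attributes.
        values:  array (2, n_attributes) of subjective evaluations
        weights: attention probabilities of the attributes (sum to 1)
        Returns (chosen option, number of steps)."""
        preference = 0.0                                    # option 0 minus option 1
        for t in range(1, time_limit + 1):
            attr = rng.choice(len(weights), p=weights)      # attention fluctuates
            preference += values[0, attr] - values[1, attr] + rng.normal(0, noise)
            if abs(preference) >= threshold:                # internally controlled stop
                return (0 if preference > 0 else 1), t
        return (0 if preference > 0 else 1), time_limit     # externally controlled stop

    values = np.array([[1.0, 0.2],     # option 0: strong on attribute 0, weak on 1
                       [0.4, 0.9]])    # option 1: the reverse pattern
    weights = np.array([0.6, 0.4])     # attribute 0 receives attention more often
    choices = [sample_choice(values, weights)[0] for _ in range(1000)]
    print("P(option 0):", np.mean(np.array(choices) == 0))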

  20. Decision Making Under Risk - DFT vs. Cumulative Prospect Theory Rieskamp (2008), JEP: LMC - DFT vs. Proportional Difference Model Scheibehenne, Rieskamp, & Gonzalez-Vallejo, 2009, Cognitive Science - Hierarchical Bayesian approach examining the limitations of cumulative prospect theory Nilsson, Rieskamp, & Wagenmakers (2011), JMP

  21. Consumer Behavior How well do sequential sampling models predict consumer behavior? - Multi-attribute decision field theory Roe, Busemeyer, & Townsend, 2001 versus - Logit and Probit Model Standard random utility models

  22. Multi-attribute Decision Field Theory • Decay: the preference state decays over time • Interrelated evaluations of options: options are compared with each other; similar alternatives compete against each other and have a negative influence on each other
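
In multi-attribute decision field theory (Roe, Busemeyer, & Townsend, 2001) these ingredients enter a single linear update of the preference state: self-feedback below one produces decay, momentary evaluations are attention-weighted contrasts between the options, and negative feedback between similar options produces competition. A minimal sketch in Python (the attribute matrix, attention probabilities, and feedback matrix are illustrative assumptions, not the study's estimates):

    import numpy as np

    rng = np.random.default_rng(1)

    # Three options on two attributes (rows: options, columns: attributes)
    M = np.array([[0.8, 0.3],
                  [0.3, 0.8],
                  [0.7, 0.3]])         # option 2 is similar to (and dominated by) option 0
    w = np.array([0.5, 0.5])           # attention probabilities for the attributes

    # Contrast matrix: each option is compared with the average of the others
    n = M.shape[0]
    C = np.eye(n) - (np.ones((n, n)) - np.eye(n)) / (n - 1)

    # Feedback matrix S: diagonal below 1 gives decay; negative off-diagonal entries,
    # larger for more similar options, give competition between options.
    S = np.array([[0.95, -0.02, -0.08],
                  [-0.02, 0.95, -0.02],
                  [-0.08, -0.02, 0.95]])

    P = np.zeros(n)                    # preference state
    for t in range(200):
        attended = rng.choice(len(w), p=w)     # which attribute receives attention
        V = C @ M[:, attended]                 # momentary attention-specific valences
        P = S @ P + V                          # decay + competition + new input
    print("Preference state after 200 steps:", np.round(P, 2))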

  23. Study 1 1. Calibration Experiment: Participants (N=30) repeatedly decided between three digital cameras (72 choices). Each camera was described by five attributes with two or three attribute values (e.g., megapixels, monitor size). Models’ parameters were estimated following a maximum likelihood approach. 2. Generalization Test Experiment
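
The calibration step fits each model's free parameters by maximizing the likelihood of the observed choices. A generic sketch of that approach in Python with SciPy (the one-parameter logit-style model and the random data are stand-ins, not the actual models or data of Study 1):

    import numpy as np
    from scipy.optimize import minimize

    def negative_log_likelihood(params, utilities, choices):
        """Negative log-likelihood of observed choices under a logit choice rule.
        params:    (scale,) sensitivity parameter, a stand-in for a model's free parameters
        utilities: array (n_trials, n_options) of model-implied utilities
        choices:   array (n_trials,) of chosen option indices"""
        v = utilities * params[0]
        v = v - v.max(axis=1, keepdims=True)
        log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
        return -log_p[np.arange(len(choices)), choices].sum()

    # Hypothetical data: 72 choices between three cameras
    rng = np.random.default_rng(2)
    utilities = rng.normal(size=(72, 3))
    choices = rng.integers(0, 3, size=72)

    result = minimize(negative_log_likelihood, x0=[1.0],
                      args=(utilities, choices), bounds=[(1e-6, 50.0)])
    print("Estimated scale:", result.x[0], " -LL:", result.fun)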

  24. Task

  25. Models’ parameters

  26. Attribute Weights Logit - Probit: r = .99; MDFT - Logit: r = .94; MDFT - Probit: r = .94

  27. Model Comparison Results: Likelihood Differences

  28. Results: Bayes Factor
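
The Bayes factor compares two models by the ratio of their marginal likelihoods, i.e., the likelihood of the data averaged over each model's prior on its parameters, which automatically penalizes extra flexibility. One simple (if crude) way to approximate a marginal likelihood is Monte Carlo integration over the prior; the sketch below is generic and is not the computation used for the results on this slide:

    import numpy as np

    def log_marginal_likelihood(log_lik_fn, prior_sampler, n_samples=20000, seed=3):
        """Crude Monte Carlo estimate of log p(data | model) = log E_prior[likelihood]."""
        rng = np.random.default_rng(seed)
        log_liks = np.array([log_lik_fn(prior_sampler(rng)) for _ in range(n_samples)])
        m = log_liks.max()                               # log-mean-exp for stability
        return m + np.log(np.mean(np.exp(log_liks - m)))

    # Bayes factor of model 1 over model 2 (the log-likelihood functions and prior
    # samplers are placeholders that would come from the models being compared):
    # log_bf = log_marginal_likelihood(log_lik_m1, prior_m1)
    #          - log_marginal_likelihood(log_lik_m2, prior_m2)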

  29. Study 1 – Generalization Generalization Test (Experiment 2): Generating a new set of options on the basis of the estimated parameter values of Experiment 1 and comparing the models’ predictions without fitting

  30. Results Comparing the observed choice proportions with the predicted choice proportions (distance measure)
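
For the generalization test the models are compared without refitting, by measuring how far their predicted choice proportions lie from the observed ones. The slides do not state the exact distance measure, so the sketch below simply uses mean absolute deviation and root-mean-square deviation as common illustrative choices:

    import numpy as np

    def proportion_distances(observed, predicted):
        """Mean absolute deviation and RMSD between observed and predicted proportions."""
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        mad = np.mean(np.abs(observed - predicted))
        rmsd = np.sqrt(np.mean((observed - predicted) ** 2))
        return mad, rmsd

    # Hypothetical choice proportions for a handful of choice sets
    observed  = [0.55, 0.30, 0.15, 0.62, 0.25]
    predicted = [0.50, 0.35, 0.15, 0.58, 0.30]
    print(proportion_distances(observed, predicted))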

  31. Conclusion • Calibration Design: • LL (log-likelihood): MDFT > Logit > Probit • Bayes factor: Logit > Probit > MDFT • Generalization Design: • Probit ≈ MDFT > Logit

  32. Study 2: Qualitative Predictions - Interrelated Evaluations of Options Decision Field Theory - Interrelated evaluations of options: 1. attention-specific evaluations 2. competition between similar options Logit / Probit - Evaluations of options are independent of each other

  33. Interrelated Evaluation of Options

  34. Interrelated Evaluation of Options

  35. Interrelated Evaluation of Options Tversky, 1972

  36. Interrelated Evaluation of Options

  37. Interrelated Evaluation of Options

  38. Interrelated Evaluation of Options

  39. Interrelated evaluation of options (Huber, Payne, & Puto, 1982)

  40. Interrelated Evaluation of Options

  41. Interrelated Evaluation of Options

  42. Research Questions Is it possible to show the interrelated evaluations of options for all three situations in a within-subject design? Does MDFT have a substantial advantage compared to the logit and probit model in predicting people’s decisions? Do the choice effects really matter?

  43. Method: Matching Task Before the main study, participants had to choose one attribute value to make both options equally attractive

  44. Method: Matching Task

  45. Main Study Choice Task: Individually specified decoys were added to the former two options (target + competitor). All choices were between three options.

  46. Peculiarity: Decoy Position The decoy was added either in relation to option A or in relation to option B

  47. Interrelated Evaluation of Options

  48. Interrelated Evaluation of Options

  49. Interrelated Evaluation of Options

  50. Interrelated Evaluation of Options • If the third option had no effect on the preferences for A and B, the average choice proportion for the target option should be 50%
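
This 50% baseline also gives a direct statistical test: if the decoy were irrelevant, target choices should follow a binomial distribution with p = .5, so the observed target share can be tested against 50% in each condition. A minimal sketch in Python (the counts are hypothetical, not data from the study):

    from scipy.stats import binomtest

    # Hypothetical counts: the target was chosen 70 times in 120 decoy trials
    result = binomtest(k=70, n=120, p=0.5, alternative="two-sided")
    print(f"Observed target share: {70/120:.2f}, p-value vs. 50%: {result.pvalue:.4f}")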
