
“An Experimental Study of Self-Relevance in Information Processing”


Presentation Transcript


  1. “An Experimental Study of Self-Relevance in Information Processing” Seda Ertaç University of Chicago June 30, 2007

  2. Questions: • Do individuals process information as they should? How Bayesian are they? • Does this depend on the relevance of the information to the self? • Self-serving use of information? • Study Bayesian updating with different types of information

  3. The Experiment MAIN IDEA: Compare two theoretically equivalent updating problems (within-person). Processing of information when information is: 1. Self-relevant (relative performance feedback) • Addition task (11+25+34+40+91=?) • Verbal task (GRE verbal) 2. Irrelevant to the self (a statistical urn problem)

  4. DESIGN: Performance Rounds • Task Performance Stage (Piece-rate compensation) • Submit Initial Estimates of Relative Performance (top, middle, or bottom of the distribution) • Receive performance feedback (“top” vs. “not top”, some new sessions with bottom/not bottom) • Submit revised estimates of performance • Accurate beliefs compensated using a quadratic scoring rule.

  5. Belief Elicitation in the Performance Rounds: Assign probabilities to each of the following three states: TOP 20%, MIDDLE 60%, BOTTOM 20%.

  6. Non-Performance Rounds • Computer randomly picks one of three states (top, middle, bottom) according to a probability distribution that comes from each subject’s own submitted priors in the task rounds • Subject sees the prior probabilities of each state being picked • Assigns a probability to each of the three states • Feedback is received (top/not top) • Revised probabilities about the three states are submitted. • Beliefs compensated using a quadratic scoring rule.
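Not in the original slides: a minimal Python sketch of what one non-performance round does under this design (the function name and the feedback encoding are my own assumptions):

```python
import random

def non_performance_round(priors, rng=random):
    """Simulate one non-performance round.

    priors: dict with probabilities for 'T', 'M', 'B' -- the subject's own
            submitted priors from a task round, shown to the subject.
    Returns the state the computer drew and the coarse feedback message.
    """
    states, weights = zip(*priors.items())
    true_state = rng.choices(states, weights=weights, k=1)[0]  # random draw from the prior
    feedback = 'top' if true_state == 'T' else 'not top'       # top / not-top feedback
    return true_state, feedback

# Example with a subject's submitted prior of (0.3, 0.5, 0.2):
state, feedback = non_performance_round({'T': 0.3, 'M': 0.5, 'B': 0.2})
print(state, feedback)
```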

  7. Procedures: • UCLA undergraduates • 200 participants. • 62% female. • 29% econ/business, 36% natural sciences+engineering, 35% other social sciences • 10 participants in each group. • Depending on session: 16 or 24 rounds played • 1.5 hours

  8. Initial Assessments in the Performance Rounds • More positive self-assessments in the addition task than the verbal (individual and group) • No difference in “confidence in assessment” across tasks • Women have less positive self-assessments than men, especially in the verbal task.

  9. Response to Information in the Performance Rounds When the information “not top” is received:

  10. Response to Information in Performance Rounds The probability assigned to “middle” is lower than it should be (z=-3.88, t=-3.75). The bias is significantly different from zero (p<0.0001).
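For concreteness, the bias reported here is the gap between the submitted posterior for “middle” and the Bayesian posterior implied by the prior; a small illustrative sketch (the numbers below are made up, not the experimental data):

```python
def middle_bias(prior_m, prior_b, submitted_posterior_m):
    """Bias in the posterior for 'middle' after 'not top' feedback.

    A negative value means the subject puts less weight on 'middle'
    than Bayes' rule prescribes.
    """
    bayesian_m = prior_m / (prior_m + prior_b)   # Pr(M | "Not Top")
    return submitted_posterior_m - bayesian_m

# Prior (M=0.6, B=0.2) implies a Bayesian posterior of 0.75 on 'middle';
# a submitted posterior of 0.70 gives a bias of about -0.05.
print(middle_bias(0.6, 0.2, 0.70))
```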

  11. Relation between initial “self-confidence” and updating: BIAS IN THE POSTERIOR FOR MIDDLE:

  12. Bias In the Non-Performance Rounds: Non-performance rounds with objective priors

  13. Comparison of Performance and Non-Performance Rounds Absolute bias is greater in the performance rounds (bias=0.10 vs. bias=0.07): t=3.46, p=0.0005. Restricting attention to “risk-neutral” cases (bias=0.10 vs. bias=0.045): t=6.15, p<0.001.

  14. Conservative View of Bayesian Updating (BU): In the non-performance task, errors occur in either direction at roughly the same frequency. Performance rounds: 77.5% revised state correct, 19% revised state reflects overuse, 3.5% reflects underuse. Non-Performance rounds: 85.5% revised state correct, 7% reflects overuse, 7.5% reflects underuse.
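A possible coding of this conservative measure, assuming it compares the modal state of the revised beliefs with the modal state of the Bayesian posterior after “not top” feedback (this reading is my own and not necessarily the paper’s exact definition):

```python
ORDER = ['T', 'M', 'B']  # states from best news to worst news about oneself

def classify_revision(prior, submitted_post):
    """Conservative classification of a revision after 'not top' feedback.

    'correct'  : revised modal state equals the Bayesian modal state;
    'overuse'  : revised modal state is worse than Bayes implies (over-reaction);
    'underuse' : revised modal state is better than Bayes implies (under-reaction).
    """
    denom = prior['M'] + prior['B']
    bayes = {'T': 0.0, 'M': prior['M'] / denom, 'B': prior['B'] / denom}
    bayes_state = max(bayes, key=bayes.get)
    chosen_state = max(submitted_post, key=submitted_post.get)
    if chosen_state == bayes_state:
        return 'correct'
    return 'overuse' if ORDER.index(chosen_state) > ORDER.index(bayes_state) else 'underuse'

# Bayes points to 'middle' here, but the revised beliefs point to 'bottom':
print(classify_revision({'T': 0.3, 'M': 0.5, 'B': 0.2},
                        {'T': 0.0, 'M': 0.3, 'B': 0.7}))  # -> 'overuse'
```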

  15. Comparison of Performance and Non-Performance Rounds (within person) • Look at cases where priors are exactly the same and feedback is also the same, also exclude the “trivial” cases of updating. • Average difference=Pr(mid)NP-Pr(mid)P=0.044 • Both Wilcoxon and t-tests confirm that the probability attributed to “middle” is higher in the non-performance rounds (z=3.36, t=3.81)
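The within-person comparison could be run as a paired test on the matched observations; a hypothetical sketch in Python with numpy/scipy (the numbers are invented, only the test setup mirrors the slide):

```python
import numpy as np
from scipy import stats

# Matched cases: probability assigned to 'middle' after identical priors and
# identical 'not top' feedback, non-performance (NP) vs. performance (P) rounds.
prob_mid_np = np.array([0.75, 0.70, 0.80, 0.72, 0.68, 0.74])
prob_mid_p  = np.array([0.70, 0.66, 0.74, 0.70, 0.63, 0.71])

print((prob_mid_np - prob_mid_p).mean())          # analogue of the 0.044 average difference
print(stats.ttest_rel(prob_mid_np, prob_mid_p))   # paired t-test
print(stats.wilcoxon(prob_mid_np, prob_mid_p))    # Wilcoxon signed-rank test
```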

  16. Are subjects better Bayesian updaters with self-relevant information? The absolute bias is significantly higher in the performance rounds. (t=6.98, p<0.001)

  17. Task 3—Belief Updating under Ambiguity: Only one of the objective probabilities is revealed (TOP ?%, MIDDLE X%, BOTTOM ?%).

  18. Frequency of Types of Errors

  19. Does the QSR work? 1,440 instances where we know both true and submitted priors; 685 of them have exactly the same priors for the 3 states. In 30% of the deviations, the highest submitted probability is higher than the highest true probability and the lowest submitted probability is lower; in 9% of the deviations, the reverse is true. Risk aversion does not seem to explain this pattern. (Data on individual risk preferences to be analyzed.) Experience seems to help in reducing deviations.

  20. Summary: • Not much support for use of information in a self-serving way. • If anything, individuals pay undue attention to performance information (possibly because they are not confident enough in their priors: ambiguity?). • Confidence and the direction of information also seem to matter. • Individuals seem to be better Bayesian updaters when processing information irrelevant to themselves.

  21. Posterior Probability Difference: Performance vs. Non-Performance Rounds (Prob_NP − Prob_P)

  22. Gender and Use of Information • There is no significant difference between the genders in terms of information processing in the statistical problem. • No significant difference when they get the information top/not top (still, women more likely to go to “bottom”). • Significant difference when the information is “not bottom” (men are much more likely to go to top, bias 0.05 versus ~0).

  23. Gender and Use of Information(continued) Does this come from confidence? Among the subgroup of self-confident people, there is no significant difference.

  24. A Conservative Measure of BU: 77% of the time people choose the correct state.

  25. Initial Assessments in the Performance Rounds • Overconfidence at the group level? • Overconfidence at the individual level? States perceived as most likely by the subjects:

  26. Sessions • First set of sessions (1-7): order Performance + Non-Performance, feedback type: Top/Not Top • Additional sessions to control for some issues: Sessions 9-11: Perf + Non-Perf + Ambiguity, feedback type: Bottom/Not Bottom • Sessions 12-15: Statistical + Task + Ambiguity, feedback type: Top/Not Top • Session 16: accurate beliefs not compensated

  27. Quadratic Scoring Rule: If really in the top: Payoff = 50 + 100·pT − 50(pT² + pM² + pB²). If really in the middle: Payoff = 50 + 100·pM − 50(pT² + pM² + pB²). If really in the bottom: Payoff = 50 + 100·pB − 50(pT² + pM² + pB²). • Payoffs are minimized if probability 1 is assigned to a wrong state and maximized if probability 1 is assigned to the true state. • Submitting true beliefs is optimal for a risk-neutral expected utility maximizer.
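A short sketch of the scoring rule above (Python; the dictionary encoding is mine). It also illustrates the last bullet: for the beliefs used here, truthful reporting gives a higher expected payoff than piling all probability on the modal state.

```python
def qsr_payoff(reported, true_state):
    """Quadratic scoring rule: 50 + 100*p(true state) - 50*(pT^2 + pM^2 + pB^2)."""
    penalty = 50 * sum(p ** 2 for p in reported.values())
    return 50 + 100 * reported[true_state] - penalty

beliefs = {'T': 0.2, 'M': 0.6, 'B': 0.2}       # the subject's true beliefs
all_on_mode = {'T': 0.0, 'M': 1.0, 'B': 0.0}   # an "extreme" misreport

# Expected payoff, taking expectations over the true state with the true beliefs:
truthful = sum(beliefs[s] * qsr_payoff(beliefs, s) for s in beliefs)
extreme  = sum(beliefs[s] * qsr_payoff(all_on_mode, s) for s in beliefs)
print(truthful, extreme)  # roughly 72 vs. 60: truth-telling pays more in expectation
```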

  28. TOTAL EARNINGS= EARNINGS FROM PERFORMANCE (piece rate per question solved) + EARNINGS FROM PRE-INFO BELIEFS+ EARNINGS FROM POST-INFO BELIEFS

  29. The Importance of Good Relative Performance in the Verbal Task, Females

  30. The Importance of Good Relative Performance in the Addition Task, Females

  31. Survey Responses and Gender • 65% of men, 50% of women think addition is more reflective of overall ability. • 23% of men, 31% of women think addition is easier. • 73% of men, 63% of women say they enjoyed the addition task more. • Men care about the addition task more than women do. • Women and men are not different in their level of caring about performance in the verbal task.

  32. The Importance of Good Relative Performance in the Verbal Task, Males

  33. The Importance of Good Relative Performance in the Addition Task

  34. Hypotheses: 1. Bayesian Updating: • Pr(T | “Top”) = 1 • Pr(T | “Not Top”) = 0 • Pr(M | “Not Top”) = Pr(M)/(Pr(M)+Pr(B)) • Pr(B | “Not Top”) = Pr(B)/(Pr(M)+Pr(B)) • Likewise for bottom/not bottom. 2. Same posteriors in the performance and non-performance rounds, if the priors and the received information are the same.
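The Bayesian benchmark in hypothesis 1 is easy to compute directly; a small sketch (Python, illustrative only; the feedback strings are my own encoding):

```python
def bayesian_posterior(priors, feedback):
    """Posterior over states 'T', 'M', 'B' given coarse relative-performance feedback."""
    pT, pM, pB = priors['T'], priors['M'], priors['B']
    if feedback == 'top':
        return {'T': 1.0, 'M': 0.0, 'B': 0.0}
    if feedback == 'not top':
        return {'T': 0.0, 'M': pM / (pM + pB), 'B': pB / (pM + pB)}
    if feedback == 'bottom':
        return {'T': 0.0, 'M': 0.0, 'B': 1.0}
    if feedback == 'not bottom':
        return {'T': pT / (pT + pM), 'M': pM / (pT + pM), 'B': 0.0}
    raise ValueError(f"unknown feedback: {feedback}")

# With the population shares (20/60/20) as the prior, 'not top' implies 0.75 on 'middle':
print(bayesian_posterior({'T': 0.2, 'M': 0.6, 'B': 0.2}, 'not top'))
```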

  35. Some Notes on Survey Responses • 23% of men and 31% of women find the addition task to be more difficult. • 65% of men think that addition is more indicative of general intelligence, whereas 50% of women do so.

  36. The Importance of Good Relative Performance in the Verbal Task

  37. The Importance of Good Relative Performance in the Addition Task, Males

  38. Summary of Survey Answers Men do better in the addition task (z=2.71) No significant difference in the verbal task. Men (women) get negative feedback 74% (83%) of the time in the addition task, and 80% (79.5%) of the time in the verbal task. (in 74% of the instances where they are faced with feedback).

  39. Things to be Done: Using the Existing Data: Individual-level analysis: • Beliefs • How they use information (do we have consistently good “Bayesians”?) • Stated importance of relative performance

  40. Design Issues • Order Effects (non-task first?) • Type of feedback (positive vs. negative) • Objective vs. subjective priors (ambiguity?) • Potential issues with the quadratic scoring rule =>Additional Sessions

  41. End-of-Experiment Survey • Which task do you think is most reflective of a person’s overall ability? • Rank tasks in terms of difficulty for you. • Rank tasks in terms of enjoyability for you. • How important was it to you to be better than others in task X (0 to 10 scale)? • Did the positive (negative) information you received affect your morale? (-10 to 10 scale) • Did the positive (negative) information you received affect your subsequent performance? (-10 to 10 scale) • Gender, major

  42. Taking only the subsample of subjects that say they understood perfectly does not change the results about information processing.

  43. Task Performance Average # of questions solved:

  44. A Conservative Measure of BU, Non-Task: 87% of the time people choose the correct state. (Contrast with 77% in the task case.)
