
Can We Foresee To Disagree?

Presentation Transcript


  1. Can We Foresee To Disagree? Robin Hanson Assoc. Prof. Economics George Mason University

  2. We Disagree, Knowingly (sincerely, about “facts”) • Stylized Facts: • Argue in science/politics, bet on stocks/sports • Especially regarding ability, when hard to check • Less on “There’s another tree” • Dismiss dumber, but not defer to smarter • Disagree not embarrass, its absence can • Given any free pair, find many disagree topics • Even people who think rationals should not disagree • Precise: we can publicly predict direction of other’s next opinion, relative to what we say

  3. How Respond to Differing Peer? (If near same quality evidence, reasoning abilities) Philosophers weigh in. Hold (Nearly) Firm: P. van Inwagen ’96, A. Plantinga ’00, G. Rosen ‘01, R. Foley ‘01, T. Kelly ’05, ‘07, P. Pettit ’06, B. Weatherson ‘07. (Near) Equal Weight: Sextus Empiricus, K. Lehrer ‘76, H. Sidgwick ’81, R. Feldman ‘04, A. Elga ’06, B. Frances ‘07, D. Christensen ‘07.

  4. Case 1: Perfect Bayesians. Truth t = x1 + x2 + x3 + …, with independent xi ~ N(0,Vi). A sees x1, reports r1 = E[t| x1] = x1. B sees x2, r1, reports r2 = E[t| x2, r1] = x2 + r1. A sees x3, r2, reports r3 = E[t| x3, r2, r1] = x3 + r2. B sees x4, r3, reports r4 = E[t| x4, r3, r2] = x4 + r3. A sees x5, r4, reports r5 = E[t| x5, r4, r3] = x5 + r4. … [Figure: A’s and B’s alternating reports r1–r5 converging toward the truth t over time.]
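
The recursion on this slide is easy to simulate. Below is a minimal sketch, assuming a finite truncation of the sum t = x1 + x2 + …; the number of terms, the variances Vi, and the random seed are arbitrary choices for illustration, not from the slide.

```python
import numpy as np

# Minimal sketch of the Case-1 process: truth t = x1 + x2 + ... (truncated),
# with independent xi ~ N(0, Vi).  Agents alternate; each sees one new signal
# plus the other's last report, and the unseen terms have mean zero, so each
# posterior mean is just (new signal) + (previous report).
rng = np.random.default_rng(0)
n_terms = 10                       # arbitrary truncation of the infinite sum
V = np.ones(n_terms)               # arbitrary variances Vi
x = rng.normal(0.0, np.sqrt(V))    # the signals x1, x2, ...
t = x.sum()                        # the truth

reports, prev = [], 0.0            # prior mean of t is 0 before any signal
for xi in x:
    r = xi + prev                  # E[t | new signal, last report]
    reports.append(r)
    prev = r

print("truth:", round(t, 3))
print("reports:", [round(r, 3) for r in reports])   # converge to the truth
```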

  5. Case 2: Noisy Bayesians. Report noise ei ~ N(0,Ei). A reports r1 = e1 + E[t| x1] = e1 + x1. B reports r2 = e2 + E[t| x2, r1] = e2 + x2 + u1*r1. A reports r3 = e3 + E[t| x3, r2, r1] = e3 + x3 + u2*r2 + v1*r1. B reports r4 = e4 + E[t| x4, r3, r2] = e4 + x4 + u3*r3 + v2*r2. A reports r5 = e5 + E[t| x5, r4, r3] = e5 + x5 + u4*r4 + v3*r3. [Figure: scatter of next report difference vs. last report difference, i = 20.]
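
A sketch of the first two rounds of this noisy case follows; it uses standard normal shrinkage to obtain a weight like the slide's u1, and the variances V1, V2, E1 are made-up values for illustration.

```python
import numpy as np

# First two rounds of the Case-2 process: A's report adds noise e1 ~ N(0, E1)
# to its posterior mean x1, so B treats r1 as a noisy signal of x1 and shrinks
# it toward the prior mean 0 with weight u1 = V1 / (V1 + E1) < 1.
rng = np.random.default_rng(1)
V1, V2, E1 = 1.0, 1.0, 0.5         # made-up signal and report-noise variances

x1 = rng.normal(0, np.sqrt(V1))
x2 = rng.normal(0, np.sqrt(V2))
e1 = rng.normal(0, np.sqrt(E1))

r1 = e1 + x1                       # A's noisy report of E[t | x1] = x1
u1 = V1 / (V1 + E1)                # B's shrinkage weight on the noisy report
b_estimate = x2 + u1 * r1          # B's posterior mean E[t | x2, r1]

print(f"u1 = {u1:.2f}  (noisy reports get discounted, not passed along)")
print(f"r1 = {r1:.3f},  B's estimate before its own noise e2: {b_estimate:.3f}")
```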

  6. Case 3: Bayesian Wannabes. A: r1 = E[t| e1 + E[t|x1]] = w1*x1 + noise. B: r2 = E[t| e2 + E[t|x2, r1]] = w2*x2 + u1*r1 + noise. A: r3 = E[t| e3 + E[t|x3, r2, r1]] = w3*x3 + u2*r2 + v1*r1 + noise. B: r4 = E[t| e4 + E[t|x4, r3, r2]] = w4*x4 + u3*r3 + v2*r2 + noise. A: r5 = E[t| e5 + E[t|x5, r4, r3]] = w5*x5 + u4*r4 + v3*r3 + noise.

  7. ‘Bayesian’ ≡ Probability, Info Theory: as in statistics, comp. sci., physics, econ. Possible worlds; take on claim, know, info, merge & common info; degree of belief (set), “if think enough”. Ideal constraints, not recipe: P(A)+P(not A)=1; not say how to fix failure; if enough, rational beliefs unique. NOT necessarily: anything goes, actual mental state, offer bets on all, exact beliefs, perfect logic, sure evidence, Bayes’ rule, prior as first beliefs, symmetric prior, common prior, accept/confirm.

  8. We Can’t Agree to Disagree. Aumann 1976 assumed: any finite info; of possible worlds; common knowledge; of exact E1[x], E2[x]; would say next; for Bayesians; with common priors; if seek truth, not lie, or misunderstand. Aumann (1976) Annals of Statistics (Nobel Prize 2005; his most cited paper by x2). [Figure: Agent 1 info set, Agent 2 info set, common knowledge set.]

  9. We Can’t Agree to Disagree. Aumann in 1976 assumed → since generalized to: any finite info → ∞ info, unsure evidence; of possible worlds → impossible worlds; common knowledge → common belief; of exact E1[x], E2[x] → a f(•, •), or who max; would say next → last ±(E1[x] - E1[E2[x]]); for Bayesians → at core, or wannabe; with common priors → symmetric prior origins; if seek truth, not lie or misunderstand.

  10. Common Belief: 2(1-q) = max % that can q-agree that they disagree re X. Monderer & Samet (1989) Games and Economic Behavior. [Figure: event E, common-belief set C, and belief sets B1, B2.]

  11. We Can’t Agree to Disagree. Aumann in 1976 assumed → since generalized to: any finite info → ∞ info, unsure evidence; of possible worlds → impossible worlds; common knowledge → common belief; of exact E1[x], E2[x] → a f(•, •), or who max; would say next → last ±(E1[x] - E1[E2[x]]); for Bayesians → at core, or wannabe; with common priors → symmetric prior origins; if seek truth, not lie or misunderstand.

  12. Disagreement Is Unpredictable Hanson (2002) Econ. Lett. 77:365–369.

  13. Strong Bayesian Response. [Figure: opinion Ei[X] vs. time, when one will clearly be told E2[X] at some future date; curves compare the Bayes response E1[E2[X]] with Equal Weight, Opinion Only, and Hold Firm responses to E1[X] and E2[X].]

  14. Joint Random Walk. For any Bayesians (or wannabes) with common prior, announced opinions follow a joint random walk. [Figure: opinion Ei[X] vs. time.]
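
A quick Monte Carlo check of this claim, using the Case-1 process from above (variances and sample sizes are arbitrary): report changes have mean roughly zero and adjacent changes are roughly uncorrelated, so the path of announced opinions is not predictably trending.

```python
import numpy as np

# Monte Carlo check: in the Case-1 process, announced opinions r1, r2, ... form
# a martingale, so opinion changes are unpredictable from past opinions.
rng = np.random.default_rng(2)
n_terms, n_sims = 10, 20_000
diffs = np.empty((n_sims, n_terms - 1))
for s in range(n_sims):
    x = rng.normal(0.0, 1.0, n_terms)
    r = np.cumsum(x)               # perfect-Bayesian reports r_k = x1 + ... + xk
    diffs[s] = np.diff(r)

print("mean opinion change:", round(float(diffs.mean()), 3))          # ~ 0
corr = np.corrcoef(diffs[:, :-1].ravel(), diffs[:, 1:].ravel())[0, 1]
print("corr(last change, next change):", round(float(corr), 3))       # ~ 0
```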

  15. Experiment Shows Disagree. Protocol: A gets clue on X, A1 = A’s guess of X; B gets clue on X, is told A1, B1 = B’s guess of X, B2 = B’s guess of A2; A is told Sign(B2-B1), A2 = A’s guess of X. Losses: A pays (A1-X)² + (A2-X)²; B pays (B1-X)² + (B2-A2)². E.g.: What % U.S. prefer dogs to cats? What is die sum? Example (in time order): 30%, 70%, 40%, “low”, 40%. Findings: A neglects clue from B; B reliably predicts neglect.

  16. We Can’t Foresee To Disagree. Aumann in 1976 assumed → since generalized to: any finite info → ∞ info, unsure evidence; of possible worlds → impossible worlds; common knowledge → common belief; of exact E1[x], E2[x] → a f(•, •), or who max; would say next → last ±(E1[x] - E1[E2[x]]); for Bayesians → at core, or wannabe; with common priors → symmetric prior origins; if seek truth, not lie or misunderstand.

  17. Generalized Beyond Bayesians • Possibility-set agents: if balanced (Geanakoplos ‘89), or “know that they know” (Samet ‘90), … • Turing machines: if can prove all computable in finite time (Megiddo ‘89, Shin & Williamson ‘95) • Ambiguity averse (max over acts a of min over p in S of Ep[Ua]), e.g., Maccheroni, Marinacci & Rustichini (2006) Econometrica • Many specific heuristics … • Bayesian Wannabes [Figure: agree vs. disagree regions.]
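
As a small illustration of the ambiguity-averse rule named in the third bullet, the sketch below picks the act maximizing worst-case expected utility over a set S of priors; the acts, states, utilities, and priors are invented for the example.

```python
import numpy as np

# Ambiguity aversion: choose the act a maximizing  min over p in S of E_p[U_a].
U = np.array([[1.0, 0.0],          # act 0: utility in state 0, state 1
              [0.6, 0.5]])         # act 1
S = [np.array([0.8, 0.2]),         # candidate priors over the two states
     np.array([0.3, 0.7])]

worst_case = [min(float(p @ U[a]) for p in S) for a in range(len(U))]
best_act = int(np.argmax(worst_case))
print("worst-case expected utilities:", worst_case)    # act 1 has the better floor
print("maxmin choice: act", best_act)
```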

  18. Complexity of Agreement. Can exchange 100 bits, get agree to within 10% (fails 10%). Can exchange 10^6 bits, get agree to within 1% (fails 1%). “We first show that, for two agents with a common prior to agree within ε about the expectation of a [0,1] variable with high probability over their prior, it suffices for them to exchange order 1/ε² bits. This bound is completely independent of the number of bits n of relevant knowledge that the agents have. … we give a protocol ... that can be simulated by agents with limited computational resources.” Aaronson (2005) Proc. ACM STOC, 634-643.

  19. Bayesian Wannabe • A general model of compute-limited agents • Try to be Bayesian, but fail, and so have belief errors • Reason about own Error = Actual – Bayesian • Can consider arbitrary meta-evidence (see Kelly ’05) • Assume two B.W. agree to disagree (A.D.) & maintain a few easy-to-compute belief relations: • A.D. regarding any state-dependent Xw implies A.D. re a constant Yw = Y. • Since info is irrelevant to estimating Y, any A.D. implies a pure error-based A.D. • So if pure error A.D. irrational, all are. Hanson (2003) Theory & Decision

  20. Consider Bayesian Wannabes. Can agents purely agree to disagree from each disagreement source? Prior: yes; info: no; errors: yes. Either combo implies the pure version! Ex: E1[π] ≈ 3.14, E2[π] ≈ 22/7.

  21. We Can’t Foresee To Disagree. Aumann in 1976 assumed → since generalized to: any finite info → ∞ info, unsure evidence; of possible worlds → impossible worlds; common knowledge → common belief; of exact E1[x], E2[x] → a f(•, •), or who max; would say next → last ±(E1[x] - E1[E2[x]]); for Bayesians → at core, or wannabe; with common priors → symmetric prior origins; if seek truth, not lie or misunderstand.

  22. Which Priors Are Rational? Prior = counterfactual belief if same min info • Extremes: all priors rational, vs. only one is • Can claim rational unique even if can’t construct (yet) • Common to say these should have same prior: • Left & right brain halves; me-today & me-Sunday • Bad prior origins, e.g. random brain changes • You must think your differing prior is special, but • Standard genetics says DNA process same for all • Standard sociology says your culture process similar Hanson (2006) Theory & Decision

  23. Standard Bayesian Model. [Figure: a common prior, Agent 1 info set, Agent 2 info set, common knowledge set.]

  24. An Extended Model Multiple Standard Models With Different Priors

  25. My Differing Prior Was Made Special. My prior and any ordinary event E are informative about each other. Given my prior, no other prior is informative about any E, nor is E informative about any other prior. Corollaries: My prior only changes if events are more or less likely. If an event is just as likely in situations where my prior is switched with someone else, then those two priors assign the same chance to that event. Only common priors satisfy these and symmetric prior origins. Hanson (2006) Theory & Decision

  26. A Tale of Two Astronomers • Disagree if universe open/closed • To justify via priors, must believe: “Nature could not have been just as likely to have switched priors, both if open and if closed” “If I had different prior, would be in situation of different chances” “Given my prior, fact that he has a particular prior says nothing useful” All false for genetic influences on brothers’ priors!

  27. We Can’t Foresee To Disagree. Aumann in 1976 assumed → since generalized to: any finite info → ∞ info, unsure evidence; of possible worlds → impossible worlds; common knowledge → common belief; of exact E1[x], E2[x] → a f(•, •), or who max; would say next → last ±(E1[x] - E1[E2[x]]); for Bayesians → at core, or wannabe; with common priors → symmetric prior origins; if seek truth, not lie or misunderstand.

  28. Why Do We Disagree? Theory or data wrong? Few know theory? Infeasible to apply? We lie? Exploring issues? Misunderstandings? We not seek truth? Each has prior: “I reason better”? They seem robust. Big change coming? Need just a few adds. We usually think not, and effect is linear. But we complain of this in others.

  29. An Answer: We Self-Deceive • We biased to think better driver, lover, … “I less biased, better data & analysis” • Evolutionary origin: helps us to deceive • Mind “leaks” beliefs via face, tone of voice, … • Leak less if conscious mind really believes • Beliefs like clothes • Function in harsh weather, fashion in mild • When made to see self-deception, still disagree • So at some level we accept that we not seek truth

  30. Life Without Knowing Disagree • Less optimistic war, competition, innovation • Little “for their own good” paternalism • Fewer “not invented here” innovation barriers • No belief-based identities: religious, political, academic, national, sporty, artistic • Far less speculative trade, surveys harder • But prices and surveys more influential • To merge info, say “my impression” vs. belief

  31. How Few Meta-Rationals? Meta-Rational (MR) = seek truth, not lie, not self-favoring-prior, know disagree theory basics • Rational beliefs linear in chance other is MR • MR who meet, talk long enough, see other MR • Then joint opinion path becomes random walk • We see virtually no such pairs • If N people each long talk with 2T others, makes ~N*T*(%MR)² pairs • 2 billion ea. talk to 100, if 1/10,000 MR, see 1000 pairs • So very few MR, or few ever talk long enough • See none even among those who accept disagreement is irrational
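
The pair-counting arithmetic on this slide can be checked directly; the numbers below just restate the slide's example (2 billion people, 100 long conversations each, 1 in 10,000 meta-rational).

```python
# Slide 31's back-of-envelope: N people, each having long talks with 2T others,
# yield roughly N * T * (fraction MR)^2 pairs in which both parties are MR.
N = 2_000_000_000        # people
two_T = 100              # long conversations per person
frac_MR = 1 / 10_000     # assumed fraction who are meta-rational

pairs = N * (two_T / 2) * frac_MR ** 2
print(f"expected MR pairs ~ {pairs:,.0f}")   # ~ 1,000, as on the slide
```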

  32. When Justified In Disagree? When others disagree, so must you -- but not with average • Standard: neutral observer using opinion distribution? • Note: more meta-rational beats smarter, more informed • Folk clues: IQ/idiocy, self-interest, emotional arousal, self-deception, formal analysis, willing to bet • Beware self-serving clue selection! • Need data on: who tends to be right when disagree? • Not same as who more right on random test! • Tetlock shows “hedgehogs” wrong more on foreign events • One media analysis favors: longer, in news style, by men, in topical publication with more readers and awards • Want institution to emphasize/encourage the accurate!

  33. We Can’t Foresee To Disagree. Aumann in 1976 assumed → since generalized to: any finite info → ∞ info, unsure evidence; of possible worlds → impossible worlds; common knowledge → common belief; of exact E1[x], E2[x] → a f(•, •), or who max; would say next → last ±(E1[x] - E1[E2[x]]); for Bayesians → at core, or wannabe; with common priors → symmetric prior origins; if seek truth, not lie or misunderstand.

  34. Common Concerns • I’m smarter, understand my reasons better • My prior is more informed • Different models/assumptions/styles • Lies, ambiguities, misunderstandings • Logical omniscience, act non-linearities • Disagree explores issue, motivates effort • We disagree on disagreement • Bayesian “reductio ad absurdum”
