Truth-Revealing Social Choice

Presentation Transcript


  1. Truth-Revealing Social Choice Lirong Xia ADT-15 Tutorial

  2. 2011 UK Referendum • Member of Parliament election: Plurality rule → Alternative Vote? • 68% No vs. 32% Yes

  3. Ordinal Preference Aggregation: Social Choice • A profile: each of Alice, Bob, and Carol reports a full ranking over {A, B, C} (e.g., Alice: A > B > C, Carol: C > B > A) • A social choice mechanism maps the profile to an outcome [slide figure: the three rankings feeding into a social choice mechanism]

  4. Ranking pictures [PGM+ AAAI-12] • Turker1, Turker2, …, Turkern each report a ranking over a set of pictures [slide figure: the Turkers' rankings]

  5. Social choice • [Slide figure: each agent i has a pair of rankings Ri, Ri*; the social choice mechanism maps the reported profile to an outcome] • Ri, Ri*: full rankings over a set A of alternatives

  6. Applications: real world • People/agents often have conflicting preferences, yet they have to make a joint decision

  7. Applications: academic world • Multi-agent systems [Ephrati and Rosenschein 91] • Recommendation systems [Ghosh et al. 99] • Meta-search engines [Dwork et al. 01] • Belief merging [Everaere et al. 07] • Human computation (crowdsourcing) [Mao et al. AAAI-13] • etc.

  8. How to design a good social choice mechanism? What does it mean to be “good”?

  9. Two goals for social choice mechanisms • GOAL 1: democracy → axiomatic social choice • GOAL 2: truth → THIS TUTORIAL

  10. Outline • Axiomatic social choice • The Condorcet Jury Theorem (CJT) • Break • Four directions of extending CJT • Beyond CJT: the objective decision-making perspective

  11. Flavor of this tutorial • Research questions + Basic models • tip of the iceberg • More references • Survey by Nitzan and Paroush (online): Collective Decision Making and Jury Theorem • Survey by Gerling et al. [2005]: Information acquisition and decision making in committees: A survey • My personal summary, send me an email

  12. Computational social choice • Joerg’s textbook • Handbook of Computational Social Choice

  13. Outline • Axiomatic social choice • The Condorcet Jury Theorem (CJT) • Break • Four directions of extending CJT • Beyond CJT: the objective decision-making perspective

  14. Common voting rules (what has been done in the past two centuries) • Mathematically, a social choice mechanism (voting rule) is a mapping from {All profiles} to {outcomes} • an outcome is usually a winner, a set of winners, or a ranking • m: number of alternatives (candidates) • n: number of agents (voters) • D = (P1,…,Pn): a profile • Positional scoring rules • A score vector (s1, …, sm) • For each vote V, the alternative ranked in the i-th position gets si points • The alternative with the most total points is the winner • Special cases • Borda, with score vector (m-1, m-2, …, 0) • Plurality, with score vector (1, 0, …, 0) [Used in the US]

  15. An example • Three alternatives {c1, c2, c3} • Score vector (2, 1, 0) (= Borda) • 3 votes: c1 > c2 > c3, c2 > c1 > c3, c3 > c1 > c2 • c1 gets 2+1+1=4, c2 gets 1+2+0=3, c3 gets 0+0+2=2 • The winner is c1
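To make the definition concrete, here is a minimal Python sketch of positional scoring rules (illustrative code added to this transcript, not part of the tutorial); it reproduces the slide-15 example and the plurality special case.

```python
# Minimal sketch of positional scoring rules (illustrative, not from the slides).
# A profile is a list of rankings; each ranking lists alternatives from best to worst.

def positional_scoring_winner(profile, score_vector):
    """Return (winner, scores) for a positional scoring rule."""
    scores = {}
    for ranking in profile:
        for position, alternative in enumerate(ranking):
            scores[alternative] = scores.get(alternative, 0) + score_vector[position]
    winner = max(scores, key=scores.get)  # ties broken arbitrarily by max()
    return winner, scores

# The slide-15 example: score vector (2, 1, 0) = Borda with three alternatives.
profile = [["c1", "c2", "c3"],   # c1 > c2 > c3
           ["c2", "c1", "c3"],   # c2 > c1 > c3
           ["c3", "c1", "c2"]]   # c3 > c1 > c2
print(positional_scoring_winner(profile, (2, 1, 0)))
# -> ('c1', {'c1': 4, 'c2': 3, 'c3': 2})

# Plurality is the special case (1, 0, 0); here all three alternatives tie at 1 point,
# and max() breaks the tie arbitrarily.
print(positional_scoring_winner(profile, (1, 0, 0)))
```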

  16. The Kemeny rule • Kendall tau distance: K(V, W) = #{pairwise comparisons on which V and W differ} • Kemeny(D) = argminW K(D, W) = argminW ΣP∈D K(P, W) • For a single winner, choose the top-ranked alternative in Kemeny(D) • [Has a statistical interpretation] • Example: K(b ≻ c ≻ a, a ≻ b ≻ c) = 2 (the pairs {a, b} and {a, c} are ordered differently)
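A brute-force sketch of the Kendall tau distance and the Kemeny rule (illustrative code, only practical for a small number of alternatives since the winner is found by enumerating all rankings):

```python
# Brute-force sketch of the Kemeny rule (illustrative, not from the slides).
from itertools import combinations, permutations

def kendall_tau(V, W):
    """Number of pairwise comparisons on which rankings V and W disagree."""
    pos_V = {a: i for i, a in enumerate(V)}
    pos_W = {a: i for i, a in enumerate(W)}
    return sum(1 for a, b in combinations(V, 2)
               if (pos_V[a] < pos_V[b]) != (pos_W[a] < pos_W[b]))

def kemeny(profile):
    """Ranking W minimizing the total Kendall tau distance to the profile."""
    alternatives = profile[0]
    return min(permutations(alternatives),
               key=lambda W: sum(kendall_tau(P, W) for P in profile))

print(kendall_tau(["b", "c", "a"], ["a", "b", "c"]))   # 2, as in the slide example
profile = [["a", "b", "c"], ["a", "b", "c"], ["c", "b", "a"]]
print(kemeny(profile))  # ('a', 'b', 'c'); its top element is the single winner
```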

  17. …and many others • Approval, Baldwin, Black, Bucklin, Coombs, Copeland, Dodgson, maximin, Nanson, Range voting, Schulze, Slater, ranked pairs, etc…

  18. Q: How to evaluate rules in terms of achieving democracy? • A: Axiomatic approach

  19. Axiomatic approach (what has been done in the past 50 years) • Anonymity: names of the voters do not matter • Fairness for the voters • Non-dictatorship: there is no dictator, whose top-ranked alternative is always the winner • Fairness for the voters • Neutrality: names of the alternatives do not matter • Fairness for the alternatives • Consistency: if r(D1)∩r(D2)≠∅, then r(D1∪D2)=r(D1)∩r(D2) • Condorcet consistency: if there exists a Condorcet winner, then it must win • A Condorcet winner beats all other alternatives in pairwise elections • Easy to compute: winner determination is in P • Computational efficiency of preference aggregation • Hard to manipulate: computing a beneficial false vote is hard
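As a concrete illustration of Condorcet consistency, the following sketch (mine, not from the slides) checks whether a profile has a Condorcet winner:

```python
# Illustrative check for a Condorcet winner: an alternative that beats every other
# alternative in pairwise majority comparisons. Not every profile has one.

def condorcet_winner(profile):
    alternatives = profile[0]
    def beats(x, y):
        # x beats y if a strict majority of rankings place x above y
        wins = sum(1 for R in profile if R.index(x) < R.index(y))
        return wins > len(profile) / 2
    for x in alternatives:
        if all(beats(x, y) for y in alternatives if y != x):
            return x
    return None  # Condorcet cycle or tie: no Condorcet winner

print(condorcet_winner([["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]))  # 'a'
print(condorcet_winner([["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]))  # None (cycle)
```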

  20. Which axiom is more important? • Some of these axiomatic properties are not compatible with others

  21. An easy fact • Theorem. For voting rules that select a single winner, anonymity is not compatible with neutrality • Proof sketch: take two alternatives {a, b} with Alice voting a > b and Bob voting b > a. Swapping the names of a and b yields the same profile with the two voters’ roles exchanged, so by anonymity the winner must stay the same, while by neutrality it must change to the other alternative. A single winner cannot satisfy both, a contradiction.

  22. Not-So-Easy facts • Arrow’s impossibility theorem • Google it! • Gibbard-Satterthwaite theorem • Google it! • Axiomatic characterization • Template: A voting rule satisfies axioms A1, A2, A3 if and only if it is rule X • If you believe A1, A2, A3 are the most desirable properties, then X is optimal • (anonymity + neutrality + consistency + continuity) ⇔ positional scoring rules [Young SIAMAM-75] • (neutrality + consistency + Condorcet consistency) ⇔ Kemeny [Young & Levenglick SIAMAM-78]

  23. Outline • Axiomatic social choice • The Condorcet Jury Theorem (CJT) • Break • Four directions of extending CJT • Beyond CJT: the objective decision-making perspective

  24. The Condorcet Jury Theorem (CJT) [Condorcet 1785, Laplace 1812] • Given • two alternatives {a, b} • competence 0.5 < p < 1 • Suppose • agents’ signals are i.i.d. conditioned on the ground truth • w/p p, the same as the ground truth • w/p 1-p, different from the ground truth • agents truthfully report their signals • The majority rule reveals the ground truth as n→∞

  25. Why is CJT important? • It justifies democracy and the wisdom of the crowd • It “lays, among other things, the foundations of the ideology of the democratic regime” [Paroush SCW-98]

  26. Proof • Group competence: Pr(maj(Pn) = a | a) • Pn: n i.i.d. votes given ground truth a • Random variable Xj: takes value 1 w/p p, 0 otherwise • encoding whether signal = ground truth • Σj=1..n Xj / n converges to p > 0.5 in probability (Law of Large Numbers), so the fraction of correct votes eventually exceeds 1/2 and the majority outputs a
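A small sketch (mine, not from the slides) that computes the exact group competence in this homogeneous setting; the printed values illustrate all three parts of CJT listed on the next slide:

```python
# Exact group competence for the basic CJT setting: n i.i.d. voters, each correct
# with probability p, majority rule (n odd to avoid ties). Illustrative sketch.
from math import comb

def group_competence(n, p):
    """Pr(majority is correct) = Pr(more than n/2 of the n i.i.d. voters are correct)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in [1, 3, 11, 101, 1001]:
    print(n, round(group_competence(n, 0.6), 4))
# The printed values start at p, increase with n, and approach 1.
```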

  27. Three parts of CJT The group competence • is higher than that of any single agent • increases in the group size n • goes to 1 as n→∞

  28. Proof of competence monotonicity • From 2k to 2k+1 • The extra vote matters only when it breaks a k@a + k@b tie, and it breaks the tie in favor of the ground truth w/p p > 1/2 • From 2k+1 to 2k+2 • The extra vote matters only in the marginal cases: w/p 1-p it turns the narrow win (k+1)@a + k@b into the tie (k+1)@a + (k+1)@b, and w/p p it turns the narrow loss k@a + (k+1)@b into the tie (k+1)@a + (k+1)@b • Under random tie-breaking, the probability of gaining a correct outcome this way exactly equals the probability of losing one, so the group competence does not decrease

  29. Limitations of CJT • The basic setting assumes: two alternatives {a, b}; a single competence 0.5 < p < 1; agents’ signals i.i.d. conditioned on the ground truth (w/p p the same as the ground truth, w/p 1-p different); truthful reporting; the majority rule • Open directions: more than two alternatives? heterogeneous agents? dependent agents? strategic agents? other rules?

  30. Outline • Axiomatic social choice • The Condorcet Jury Theorem (CJT) • Break • Four directions of extending CJT • Beyond CJT: the objective decision-making perspective

  31. Extensions • Dependent agents • Heterogeneous agents • Strategic agents • More than two alternatives

  32. An active area • Myerson, Shapley & Grofman, … • Social Choice and Welfare, American Political Science Review, Games and Economic Behavior, Mathematical Social Sciences, Theory and Decision, Public Choice, Econometrica, JET • MSS special issue on ADT-15

  33. Extensions • Dependent agents • Heterogeneous agents • Strategic agents • More than two alternatives

  34. Does CJT hold for dependent agents? The group competence • is higher than that of any single agent • Not always (mimicking one leader) • increases in the group size n • Not always (mimicking one leader) • goes to 1 as n→∞ • Yes for some dependency models [Berg 92; Ladha 92, 93; Peleg & Zamir 12]

  35. Dependent agents • Positive correlations • agents are likely to receive similar signals even conditioned on the ground truth • Negative correlations • agents are likely to receive different signals • Conjecture: positive correlations reduce group competence • positively correlated agents effectively reduce the number of agents

  36. Opinion leader model [Boland et al. 89] • One leader (Y), 2k followers (X1,…, X2k), same competence p • Pr(Y=1) = Pr(Xj=1) = p • Xj’s are independent conditioned on Y • Correlation r • Pr(Xj=1 | Y=1) = p + r(1-p) • Pr(Xj=0 | Y=0) = (1-p) + rp • Theorem. In the opinion leader model • when p>0.5 the group competence decreases in r • when p<0.5 the group competence increases in r • when p=0.5 the group competence does not change in r
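A Monte Carlo sketch of the opinion leader model (illustrative; function and parameter names are mine, and Pr(Xj=1 | Y=0) = p(1-r) is derived from the slide’s formula for Pr(Xj=0 | Y=0)):

```python
# Monte Carlo sketch of the opinion leader model [Boland et al. 89] (illustrative).
# X = 1 / Y = 1 means "votes for the ground truth".
import random

def group_competence_leader(p, r, k, trials=100_000):
    correct = 0
    for _ in range(trials):
        y = 1 if random.random() < p else 0                         # the leader's vote
        follower_prob = p + r * (1 - p) if y == 1 else p * (1 - r)  # Pr(Xj = 1 | Y)
        votes = y + sum(1 for _ in range(2 * k) if random.random() < follower_prob)
        correct += votes > (2 * k + 1) / 2                          # majority of 2k+1 votes
    return correct / trials

for r in [0.0, 0.3, 0.6, 0.9]:
    print(r, round(group_competence_leader(p=0.7, r=r, k=5), 3))
# With p = 0.7 > 0.5, the estimated group competence decreases as r grows,
# consistent with the theorem on this slide.
```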

  37. Common evidence model [Boland et al. 89] • One piece of common evidence (E), 2k+1 agents (X1,…, X2k+1), same competence p • Pr(E=1) = Pr(Xj=1) = p • Xj’s are independent conditioned on E • Correlation r • Pr(Xj=1 | E=1) = p + r(1-p) • Pr(Xj=0 | E=0) = (1-p) + rp • Theorem. In the common evidence model • when p>0.5 the group competence decreases in r • when p<0.5 the group competence increases in r • when p=0.5 the group competence does not change in r

  38. Common evidence model [Dietrich and List 2004] • Ground truth G, common evidence E, signals X1, …, Xn independent conditioned on E [slide figure: G → E → X1, …, Xn] • Given any ideal vote function f: E → G • Competence pe = Pr(Xj = f(e) | e) • Theorem. The majority rule converges to f(e) as n→∞

  39. Extensions • Dependent agents • Heterogeneous agents • Strategic agents • More than two alternatives

  40. Does CJT hold for heterogeneous agents? The group competence • is higher than that of any single agent • Not always (1, 0.9, 0.8, …) • increases in the group size n • Not always (1, 0.9, 0.8, …) • goes to 1 as n→∞ • not always: pj = 0.5 + 1/n • Yes under some condition [Berend & Paroush, 1998]
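A sketch (mine, not from the slides) that computes the exact group competence for independent agents with heterogeneous competences; it illustrates the pj = 0.5 + 1/n example above, where the group competence does not approach 1:

```python
# Exact group competence for independent, heterogeneous competences p1..pn
# (illustrative sketch): dynamic programming over the number of correct votes.

def group_competence_hetero(ps):
    n = len(ps)
    dist = [1.0]                        # dist[k] = Pr(exactly k correct votes so far)
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        dist = new
    return sum(q for k, q in enumerate(dist) if k > n / 2)   # strict majority correct

# pj = 0.5 + 1/n: competence barely above 1/2; as n grows, the group competence
# does not approach 1 (it drifts back toward 1/2), matching the "not always" bullet.
for n in [11, 101, 1001]:
    print(n, round(group_competence_hetero([0.5 + 1.0 / n] * n), 3))
```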

  41. Group competence for heterogeneous agents • Independent signals • Agent j’s competence is pj • Theorem [Berend & Paroush, 1998]. CJT holds if and only if • [condition shown on the slide], or • for every sufficiently large n, [condition shown on the slide]

  42. Competence monotonicity [Berend & Sapir 05] • Given the competences {p1,…,pn} of n agents where pj ≥ 0.5 • Ml: the group competence of l randomly chosen agents under the majority rule • Theorem [Berend & Sapir 05]. For two alternatives and all l ≤ n-1 • Ml ≤ Ml+1 if l is even • Ml = Ml+1 if l is odd

  43. Optimal voting rule for two alternatives • Theorem [Shapley and Grofman 1984]. Given the competences {p1,…,pn} of n agents with independent signals, the maximum likelihood estimator is weighted majority voting with weights wj = log(pj / (1-pj)) • Proof. Suppose the ground truth is a; the log-likelihood of the profile is Σ{j votes a} log pj + Σ{j votes b} log(1-pj). Subtracting the log-likelihood under ground truth b, the common terms cancel, so a is more likely than b exactly when Σ{j votes a} wj > Σ{j votes b} wj.
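A minimal sketch of the resulting weighted majority rule, assuming independent signals and known competences (illustrative code, not from the slides):

```python
# Sketch of the optimal rule for two alternatives with known, independent
# competences: weighted majority with weights log(pj / (1 - pj)).
from math import log

def weighted_majority_mle(votes, competences):
    """votes: list of 'a'/'b'; competences: list of pj in (0, 1). Returns the MLE."""
    score = sum((1 if v == 'a' else -1) * log(p / (1 - p))
                for v, p in zip(votes, competences))
    return 'a' if score > 0 else 'b'

# A highly competent agent can outweigh several mildly competent ones:
print(weighted_majority_mle(['a', 'b', 'b', 'b'], [0.99, 0.6, 0.6, 0.6]))  # 'a'
```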

  44. Extensions • Dependent agents • Heterogeneous agents • Strategic agents • More than two alternatives

  45. Does CJT hold for strategic agents? The group competence • is higher than that of any single agent • Not always (same-vote equilibrium) • increases in the group size n • Not always (same-vote equilibrium) • goes to 1 as n→∞ • Yes for some models and informative equilibrium

  46. Strategic voting • Common interest Bayesian voting game [Austen-Smith & Banks APSR-96] • two alternatives {a, b}, two signals {A, B}, a prior, and Pr(signal | truth) • pa = Pr(signal=A | truth=a) • pb = Pr(signal=B | truth=b) • agents have the same utility function U(outcome, ground truth) = 1 iff outcome = ground truth • sincere voting: vote for the alternative with the highest posterior probability • informative voting: vote for the signal • strategic voting: vote for the alternative with the highest expected utility

  47. Timeline of the game • Nature chooses a ground truth g • Every agent j receives a signal sj ~ Pr(sj | g) • Every agent computes the posterior distribution (belief) over the ground truth using Bayes’ rule • Every agent chooses a vote to maximize her expected utility according to her belief • The outcome is computed by the voting rule

  48. High level example • Two signals, two voters • Model: Pr(signal = ground truth) = p > 0.5 • [Slide figure: the posterior after a signal is p (vs. 1-p for the other ground truth); given the other (truthful) agent’s vote and half/half tie-breaking, the figure tabulates the winner and the utility (1, 0, 0.5, 0.5) of each possible vote]

  49. Sincere voting = informative voting? • Setting • Two alternatives {a, b}, two signals {A, B} • Three agents • pa = 0.8, pb = 0.6 • Prior: Pr(a) = 0.1, Pr(b) = 0.9 • An agent receives A • Informative voting: a • posterior probability: proportional to 0.1*0.8 @ a vs. 0.9*0.4 @ b (since Pr(A | b) = 1 - pb = 0.4) • sincere voting: b

  50. Sincere voting = strategic voting? • Setting • Two alternatives {a, b}, two signals {A, B} • Three agents • pa = 0.8, pb = 0.6 • Uniform prior: Pr(a) = Pr(b) = 0.5 • An agent receives A; the other two agents are sincere/informative • Informative voting: a • posterior probability: 2/3 @ a + 1/3 @ b • sincere voting: a • probability of a tie (the other two agents’ votes are {a, b}): 0.32 given a, 0.48 given b • Expected utility for voting a: 0.32 * 2/3 ≈ 0.21 • Expected utility for voting b: 0.48 * 1/3 = 0.16 • Strategic voting: a
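A short sketch (function and variable names are mine) reproducing the posterior and pivotal-probability calculations of slides 49 and 50:

```python
# Sketch of the slide 49-50 calculations: posterior after a signal, and the
# expected utility of each vote conditioning on being pivotal (a tie among the
# other two informative voters). Illustrative; not from the slides.

def posterior(prior_a, pa, pb, signal):
    """Pr(truth = a | signal), with pa = Pr(A|a), pb = Pr(B|b)."""
    like_a = pa if signal == 'A' else 1 - pa
    like_b = 1 - pb if signal == 'A' else pb
    num = prior_a * like_a
    return num / (num + (1 - prior_a) * like_b)

# Slide 49: prior Pr(a)=0.1, pa=0.8, pb=0.6; after signal A, b is still more likely.
print(round(posterior(0.1, 0.8, 0.6, 'A'), 3))          # 0.182 < 0.5: sincere vote is b

# Slide 50: uniform prior; the other two agents vote informatively.
post_a = posterior(0.5, 0.8, 0.6, 'A')                   # 2/3
tie_given_a = 2 * 0.8 * 0.2                              # one a-vote, one b-vote | truth a
tie_given_b = 2 * 0.4 * 0.6                              # one a-vote, one b-vote | truth b
eu_vote_a = post_a * tie_given_a                         # my vote matters only in a tie
eu_vote_b = (1 - post_a) * tie_given_b
print(round(eu_vote_a, 3), round(eu_vote_b, 3))          # 0.213 > 0.16: strategic vote is a
```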
