Vote Elicitation with Probabilistic Preference Models: Empirical Estimation and Cost Tradeoffs


Presentation Transcript


  1. Vote Elicitation with Probabilistic Preference Models: Empirical Estimation and Cost Tradeoffs
  Tyler Lu and Craig Boutilier, University of Toronto

  2. Introduction
  New communication platforms can transform the way people make group decisions. How can computational social choice realize this shift?
  [Diagram: people's choices are aggregated via computational social choice into a consensus decision]

  3. Introduction
  • Computational social choice: aggregates full preferences (rankings)
  • The field mostly studies rank-based schemes (Borda, maximin, etc.)
  • Yet rank-based voting schemes are rarely used in practice
  Problem: cognitive and communication burden.
  Our approach (recent work): elicit just the right preferences to make good-enough group decisions.
  This work: multi-round elicitation and probabilistic preference models to further reduce these burdens.
  [Figure: Bob, Cindy, and Alice submitting preferences]

  4. Outline
  • Preliminaries
  • Multi-round Probabilistic Vote Elicitation
  • Methodology and Analysis for One-round
  • Experimental Results

  5. Preliminaries
  • Voters N = {1, …, n}; alternatives/items A = {a1, …, am}
  • A vote vi is a ranking of A
  • A complete profile is v = (v1, …, vn)
  • A voting rule r maps a profile to a winning alternative
  [Figure: Bob, Alice, and Cindy each submit a ranking; a voting rule r selects a winner]

  6. Score-based Rules
  • Many rules have a score-based interpretation
  • The score is a surrogate for "total group satisfaction"
  • E.g., Borda, Bucklin, maximin, Copeland, etc.
  • Each item is assigned a score s(a, v) given the full rankings
  • The winner is the item with the highest score
  [Figure: Borda scores for Bob, Alice, and Cindy's rankings, e.g. s(·, v) = 7, 6, 5 for the three items]
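To make the scoring concrete, here is a minimal Python sketch of Borda scoring; the profile, item names, and function names are invented for illustration and are not from the talk.

```python
# Minimal sketch of Borda scoring (illustrative; names and profile invented).
# Each vote is a ranking: a list of items from most to least preferred.
# With m items, Borda gives m-1 points to the top item, m-2 to the next, ..., 0 to the last.

from collections import defaultdict

def borda_scores(profile):
    """Return the Borda score s(a, v) of every item given a complete profile."""
    scores = defaultdict(int)
    for vote in profile:
        m = len(vote)
        for position, item in enumerate(vote):
            scores[item] += m - 1 - position
    return dict(scores)

def borda_winner(profile):
    scores = borda_scores(profile)
    return max(scores, key=scores.get)

# Example: 3 voters ranking 3 items (hypothetical data).
profile = [
    ["a1", "a2", "a3"],  # Bob
    ["a1", "a3", "a2"],  # Alice
    ["a2", "a1", "a3"],  # Cindy
]
print(borda_scores(profile))  # {'a1': 5, 'a2': 3, 'a3': 1}
print(borda_winner(profile))  # 'a1'
```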

  7. Partial Preferences
  • A partial vote pi is a partial order over A
  • Represented as a (consistent) set of pairwise comparisons
  • Higher-order forms: top-k, bottom-k, …
  • Easy for humans to specify
  • A partial profile is p = (p1, …, pn)
  How do we make a decision with partial preferences?
  [Figure: Alice's partial vote shown as a few pairwise comparisons]
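For illustration, a top-k response can be expanded into the pairwise comparisons it implies; this sketch is hypothetical (the function name and example data are invented).

```python
# Minimal sketch (illustrative): a top-k response induces a consistent set of
# pairwise comparisons -- every ranked item beats every lower-ranked item and
# every unranked item.

def topk_to_comparisons(topk, all_items):
    """Return the pairwise comparisons (a > b) implied by a top-k prefix."""
    comparisons = set()
    unranked = [a for a in all_items if a not in topk]
    for i, a in enumerate(topk):
        for b in topk[i + 1:]:      # a is ranked above b
            comparisons.add((a, b))
        for b in unranked:          # a beats everything unranked
            comparisons.add((a, b))
    return comparisons

# Example (hypothetical): Alice reports her top 2 out of 4 items.
print(sorted(topk_to_comparisons(["a2", "a1"], ["a1", "a2", "a3", "a4"])))
# [('a1', 'a3'), ('a1', 'a4'), ('a2', 'a1'), ('a2', 'a3'), ('a2', 'a4')]
```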

  8. Decisions with Partial Preferences
  • Possible and necessary co-winners [Konczak, Lang '05]
  • More recently: minimax regret (MMR) [Lu, Boutilier '11]
  • Provides a worst-case guarantee on score loss w.r.t. the true winner
  • Small MMR means a good-enough decision
  • Zero MMR means the decision is optimal

  9. Minimax Regret
  The adversary completes the partial profile and plays the best response against our recommendation; we recommend the item minimizing this worst case:
  MR(a, p) = max over completions v of p of [ max over a' in A of s(a', v) - s(a, v) ]   (adversarial completion, best response a')
  MMR(p) = min over a in A of MR(a, p)
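The following brute-force sketch makes the definition concrete by enumerating all completions under Borda. It is purely illustrative: the paper relies on far more efficient MMR computation, and all names here are invented.

```python
# Brute-force minimax regret under Borda (illustrative; feasible only for tiny
# instances, since it enumerates every completion of every partial vote).

from itertools import permutations, product

def completions(partial_vote, items):
    """All rankings of `items` consistent with a set of (a > b) comparisons."""
    return [r for r in permutations(items)
            if all(r.index(a) < r.index(b) for a, b in partial_vote)]

def borda(ranking, item):
    return len(ranking) - 1 - ranking.index(item)

def score(item, profile):
    return sum(borda(vote, item) for vote in profile)

def minimax_regret(partial_profile, items):
    """Return (MMR, recommended item) for a partial profile under Borda."""
    per_voter = [completions(p, items) for p in partial_profile]
    joint = list(product(*per_voter))  # all joint completions of the profile
    def max_regret(a):
        # Adversary picks the completion v and the best response a'.
        return max(max(score(b, v) for b in items) - score(a, v)
                   for v in joint)
    return min((max_regret(a), a) for a in items)

# Example (hypothetical): two voters, three items, one comparison each.
items = ["a1", "a2", "a3"]
p = [{("a1", "a2")}, {("a2", "a3")}]
print(minimax_regret(p, items))
```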

  10. Vote Elicitation
  • MMR yields good choices given the "right" partial votes
  • How do we minimize the number of partial-preference queries needed to make a good decision?
  • MMR-based incremental elicitation [Lu, Boutilier '11]
  • Problem: must wait for a response before issuing the next query

  11. Incremental Elicitation Woes
  • Each query is a (voter, pairwise comparison) pair
  • Exploits MMR; each query depends on all previous responses
  [Figure: the elicitor repeatedly interrupts Bob with yes/no pairwise queries; Bob is annoyed at having to come back for each one, the "interruption cost"]

  12. Our Solution: Multi-Round Batching
  • Send queries to many voters in each round
  [Figure: round 1, "give your top 2"; round 2, "give your next top 1"; elicitation stops with a recommendation once MMR ≤ ε. Interruption cost is reduced.]

  13. Multi-Round Probabilistic Vote Elicitation
  • Query class: "rank your top 5", "is A > B?", etc.
  • Each query is a single request for preferences from a voter
  • Different queries have different cognitive costs
  • In each round, a query policy π selects a subset of voters and corresponding queries
  • Queries can be conditioned on the previous rounds' responses
  • A function ω selects the winner and stops elicitation
  How do we design an elicitation protocol with provably good performance?
  • Worst-case analysis is not useful (for common rules)
  • Instead, use probabilistic preference models to guide the design
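As a reading aid, here is a minimal sketch of the (π, ω) protocol loop. The types, class of queries, and function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the multi-round protocol interface (pi, omega). All names
# are invented; the talk specifies only the abstract roles of the query policy
# pi and the winner-selection / stopping function omega.

from typing import Callable, Dict, List, Optional, Tuple

Query = Tuple[int, str]          # (voter id, query text, e.g. "top-2" or "a1 > a2?")
Responses = Dict[Query, object]  # responses gathered so far

def run_protocol(
    pi: Callable[[Responses], List[Query]],      # pi: picks next round's batch
    omega: Callable[[Responses], Optional[object]],  # omega: winner, or None
    ask: Callable[[Query], object],              # oracle simulating the voters
    max_rounds: int = 100,
):
    responses: Responses = {}
    for round_no in range(max_rounds):
        winner = omega(responses)
        if winner is not None:       # omega decides to stop (e.g., MMR <= eps)
            return winner, round_no
        for q in pi(responses):      # batch of queries for this round
            responses[q] = ask(q)
    return omega(responses), max_rounds
```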

  14. Multi-Round Probabilistic Vote Elicitation
  • A distribution P over vote profiles induces a distribution over runs of the protocol (π, ω)
  • This in turn defines a distribution over performance metrics:
  • Quality of the winner: max regret, expected regret
  • Amount of information elicited: equivalent number of pairwise comparisons, or bits
  • Number of rounds of elicitation
  Tradeoffs! Which of these matters depends on which costs are important.

  15. One-Round Protocol
  • Query type: top-k ("rank your k most preferred items")
  • Simple top-k heuristics [Kalech et al. '11] based on necessary and possible co-winners, but:
  • No theoretical guarantees on winner quality
  • No guidance on choosing a good k
  • No tradeoff between winner quality and k

  16. Probably Approximately Correct (PAC) One-Round Protocol
  • Works for any rank-based voting rule and any distribution P over profiles
  • What is a good k? Let p[k] be the partial votes after eliciting top-k
  • k*: the smallest k such that, with probability ≥ 1 - δ, MMR(p[k]) ≤ ε
  • As long as we can sample from P, we can find an "approximately" good k
  • Samples can come from historical datasets, surveys, or a learned distribution

  17. Probably Approximately Correct One-Round Protocol: General Methodology
  • Input: a sample of vote profiles v1, …, vt
  • MMR accuracy ε > 0; MMR confidence δ > 0
  • Sampling accuracy ξ > 0; sampling confidence η > 0
  Output k̂: the smallest k such that the fraction of sample profiles with MMR(p[k]) ≤ ε is at least 1 - δ - ξ (see the sketch below).
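A minimal sketch of this empirical selection rule, assuming a routine mmr_after_topk(profile, k) that computes MMR(p[k]) (e.g. the brute-force version sketched earlier, applied to top-k partial votes); the function names and signature are invented for illustration.

```python
# Minimal sketch of the empirical k-selection rule (illustrative).

def choose_k(sample_profiles, m, epsilon, delta, xi, mmr_after_topk):
    """Return the smallest k whose empirical success rate is >= 1 - delta - xi.

    sample_profiles: t sampled complete profiles v1..vt
    m: number of alternatives; mmr_after_topk(v, k) computes MMR(p[k]) for v.
    """
    t = len(sample_profiles)
    for k in range(1, m + 1):
        successes = sum(
            1 for v in sample_profiles if mmr_after_topk(v, k) <= epsilon
        )
        if successes / t >= 1 - delta - xi:
            return k
    return m  # full rankings always give MMR = 0, so this is a safe fallback
```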

  18. Probably Approximately Correct One-Round Protocol
  Theorem: if the sample size t is sufficiently large (a Hoeffding-style bound on the order of (1/ξ²)·log(m/η) suffices), then for any P, with probability ≥ 1 - η:
  • k̂ ≤ k*
  • P[ MMR(p[k̂]) ≤ ε ] ≥ 1 - δ - 2ξ

  19. Practical Considerations
  • The sample size from the theorem is typically unnecessarily large
  • The empirical methodology can instead be used heuristically:
  • Generate histograms of MMR over profile samples from runs of elicitation
  • "Eyeball" a good k, and the tradeoffs between k and MMR

  20. Experimental Results
  • First experiments use the Mallows distribution: rankings generated i.i.d., unimodal, with dispersion parameter φ
  • t = 100 sampled profiles (for formal guarantees, use the theorem's bounds on t)
  • Borda voting
  • Simulate runs of elicitation; measure max regret and true regret
  • Regret is normalized by the number of voters
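To reproduce this kind of simulation, here is a minimal sketch of Mallows sampling via the standard repeated-insertion method (RIM); it is illustrative only, not the authors' code, and the example parameters are invented.

```python
# Minimal sketch of sampling a ranking from a Mallows model via the standard
# repeated-insertion method (RIM). Illustrative; names/parameters are invented.

import random

def sample_mallows(reference, phi):
    """Sample one ranking from Mallows(reference, phi), with 0 < phi <= 1.

    Item i (0-indexed) of the reference ranking is inserted at position j of
    the partial ranking with probability proportional to phi**(i - j): placing
    it lower (j = i) adds no inversions, placing it higher adds i - j of them.
    """
    ranking = []
    for i, item in enumerate(reference):
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = random.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, item)
    return ranking

# Example (hypothetical): a profile of 100 voters over 5 items, fairly peaked.
reference = ["a1", "a2", "a3", "a4", "a5"]
profile = [sample_mallows(reference, 0.3) for _ in range(100)]
print(profile[0])
```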

  21. Experimental Results
  [Plot: results on Mallows profiles; the x-axis is MMR per voter]

  22.–25. Experimental Results
  [Plots omitted]

  26. Experimental Results: Sushi
  Sushi dataset: 10 alternatives; 50 profiles, each with 100 rankings. [Plot omitted]

  27. Experimental Results: Dublin North
  Dublin North dataset: 12 alternatives; 73 profiles, each with 50 rankings. [Plot omitted]

  28. Concluding Summary
  • A model of multi-round elicitation protocols
  • Highlights the tradeoffs among winner quality, amount of information elicited, and number of rounds
  • Probabilistic preference profiles guide design and performance analysis, instead of worst-case analysis
  • One-round, top-k elicitation:
  • Simple, efficient empirical methodology for choosing k
  • PAC guarantees and sample complexity
  • With the MMR solution concept, enables probabilistic and anytime guarantees that previous work cannot achieve

  29. Future Work
  • Multi-round elicitation with top-k or pairwise-comparison queries
  • Fully explore the above tradeoffs (associating different costs with each)
  • Assess both expected regret and max regret

  30. The End
