
Robust Winners and Winner Determination Policies under Candidate Uncertainty



Presentation Transcript


  1. Robust Winners and Winner Determination Policies under Candidate Uncertainty
Joel Oren, University of Toronto. Joint work with Craig Boutilier, Jérôme Lang, and Héctor Palacios.

  2. Motivation – Winner Determination under Candidate Uncertainty
• A committee, with preferences over alternatives:
  • Prospective projects.
  • Goals.
• Costly determination of availabilities:
  • Market research for determining the feasibility of a project: engineering estimates, surveys, focus groups, etc.
• The "best" alternative depends on the available ones.
[Figure: groups of 3, 2, and 4 voters with rankings over candidates a, b, c; some candidates' availability is unknown, and the winner depends on which are available.]

  3. Efficient Querying Policies for Winner Determination
• Voters submit votes in advance.
• Query candidates sequentially, until enough is known to determine the winner.
• Example: a wins.
[Figure: groups of 3, 2, and 4 voters with rankings over a, b, c; after querying, a is determined to be the winner.]

  4. The Formal Model
• A set C of candidates.
• A vector of rankings over C (a preference profile).
• The set C is partitioned into:
  • Y – candidates known a priori to be available.
  • U – the "unknown" set.
• Each candidate in U is available with a given probability.
• A voting rule r selects the election winner.
[Figure: candidate set C partitioned into Y (available) and U (unknown), with voters' rankings over a, b, c.]

  5. Querying &amp; Decision Making
• At each iteration, submit a query q(x) for a candidate x in U.
• An information set records the answers received so far.
• The initial available set is Y.
• Upon querying candidate x:
  • If available: add x to the available set.
  • If unavailable: remove x from consideration.
• The preference profile is restricted to the set of available candidates.
• Stop when the information set is r-sufficient – no additional querying can change the winner – yielding the "robust" winner.
[Figure: voters' rankings over a, b, c; unknown candidates annotated with availability probabilities 0.5, 0.7, 0.4.]
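
The querying loop on this slide can be sketched as follows – a minimal Python simulation assuming the plurality rule, lexicographic tie-breaking, and a naive fixed query order (all illustrative choices; the talk leaves the rule and the policy abstract):

```python
from collections import Counter
from itertools import combinations

def plurality(profile, available):
    """Each voter votes for their top available candidate;
    ties broken lexicographically (illustrative assumption)."""
    counts = Counter(next(c for c in r if c in available) for r in profile)
    return min(available, key=lambda c: (-counts[c], c))

def subsets(s):
    s = sorted(s)
    return (set(t) for k in range(len(s) + 1) for t in combinations(s, k))

def robust_winner(profile, Y, U):
    """Return the winner if it is the same under every completion of U, else None."""
    winners = {plurality(profile, Y | S) for S in subsets(U) if Y | S}
    return winners.pop() if len(winners) == 1 else None

def query_until_sufficient(profile, Y, U, is_available):
    """Query unknown candidates (here in lexicographic order, for simplicity)
    until the information set is r-sufficient."""
    Y, U = set(Y), set(U)
    n_queries = 0
    while (w := robust_winner(profile, Y, U)) is None:
        x = min(U)             # a real policy would choose x more cleverly
        U.remove(x)
        n_queries += 1
        if is_available(x):
            Y.add(x)
    return w, n_queries
```

For example, with 4 voters ranking a≻b≻c, 3 ranking b≻c≻a, and 2 ranking c≻b≻a, starting from Y = {b, c} and unknown candidate a, a single query settles the election either way.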

  6. Computing a Robust Winner
• Robust winner: given an information set, x is a robust winner if x wins under every availability scenario consistent with it.
• A related question in voting [destructive control by candidate addition]: given a candidate set C, a disjoint spoiler set, a preference profile over both, a distinguished candidate x, and a voting rule r.
• Question: is there a subset of spoilers whose addition makes x lose?
• Proposition: Candidate x is a robust winner iff there is no destructive control against x, where the spoiler set is the unknown set U.

  7. Computing a Robust Winner
• Proposition: Candidate x is a robust winner iff there is no destructive control against x, where the spoiler set is the unknown set U.
• Implication: Plurality, Bucklin, ranked pairs – coNP-complete; Copeland, Maximin – poly-time tractable.
• Additional results: checking whether a candidate is a robust winner for top cycle, uncovered set, and Borda can be done in polynomial time.
• Top cycle &amp; uncovered set: we prove useful criteria on the corresponding majority graph.
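
For small instances the proposition can be verified by brute force – a sketch for plurality with lexicographic tie-breaking (my enumeration, not the paper's polynomial-time criteria):

```python
from collections import Counter
from itertools import combinations

def plurality(profile, available):
    # Each voter votes for their top available candidate; lexicographic tie-break.
    counts = Counter(next(c for c in r if c in available) for r in profile)
    return min(available, key=lambda c: (-counts[c], c))

def subsets(s):
    s = sorted(s)
    return (set(t) for k in range(len(s) + 1) for t in combinations(s, k))

def is_robust_winner(x, profile, Y, U):
    """x wins under every completion of the unknown set U."""
    return all(plurality(profile, Y | S) == x for S in subsets(U) if Y | S)

def destructive_control(x, profile, C, spoilers):
    """Destructive control by candidate addition: can some subset of
    spoilers be added to C so that x no longer wins?"""
    return any(plurality(profile, C | S) != x for S in subsets(spoilers))

profile = [("a","b","c")]*4 + [("b","c","a")]*3 + [("c","b","a")]*2
Y, U = {"b", "c"}, {"a"}
x = plurality(profile, Y)   # winner given only the known-available set
# The equivalence from the proposition, with spoiler set = U:
assert is_robust_winner(x, profile, Y, U) == (not destructive_control(x, profile, Y, U))
```

Here x = b wins on {b, c} but loses once a is added, so b is not robust, and the corresponding destructive-control instance has a solution.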

  8. The Query Policy
• Goal: design a policy for finding the correct winner.
• Can be represented by a decision tree.
• Example for the vote profile (plurality):
  • abcde, abcde, adbec,
  • bcaed, bcead,
  • cdeab, cbade, cdbea
[Figure: decision tree querying candidates one by one; internal nodes are queries, leaves are labeled with winners.]
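
For the profile above one can check concretely which information sets are r-sufficient – again by brute force, assuming plurality with lexicographic tie-breaking:

```python
from collections import Counter
from itertools import combinations

def plurality(profile, available):
    counts = Counter(next(c for c in r if c in available) for r in profile)
    return min(available, key=lambda c: (-counts[c], c))

def subsets(s):
    s = sorted(s)
    return (set(t) for k in range(len(s) + 1) for t in combinations(s, k))

def robust_winner(profile, Y, U):
    """The winner if it agrees across all completions of U, else None."""
    winners = {plurality(profile, Y | S) for S in subsets(U) if Y | S}
    return winners.pop() if len(winners) == 1 else None

# The 8-vote profile from the slide (each string is one voter's ranking):
profile = ["abcde", "abcde", "adbec",
           "bcaed", "bcead",
           "cdeab", "cbade", "cdbea"]

# Knowing only that a and b are available is NOT sufficient:
print(robust_winner(profile, {"a", "b"}, {"c", "d", "e"}))   # None
# Knowing that a, b, and c are all available settles the election:
print(robust_winner(profile, {"a", "b", "c"}, {"d", "e"}))   # a
```

In the second case every voter ranks one of a, b, c above both d and e, so the answers for d and e cannot change the outcome – exactly the condition a sufficient leaf must satisfy.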

  9. Winner Determination Policies as Trees
• r-sufficient tree:
  • The information set at each leaf is r-sufficient.
  • Each leaf is correctly labelled with the winner.
• Each queried candidate/node has an associated cost.
• The cost of a policy is its expected total query cost, over the distribution of the true available set.
[Figure: decision tree with leaves labeled "a wins", "b wins", "c wins".]

  10. Recursively Finding Optimal Decision Trees
• Cost of a tree: expected total query cost, as above.
• For each node – a training set: the possible true underlying available sets A that agree with the node's information set.
• Can be solved using a dynamic-programming approach.
• Running time: exponential – computationally heavy.
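
The dynamic program can be sketched as a memoized recursion over information sets – assuming plurality, independent availabilities, and unit query costs (all illustrative simplifications):

```python
from collections import Counter
from functools import lru_cache
from itertools import combinations

def plurality(profile, available):
    counts = Counter(next(c for c in r if c in available) for r in profile)
    return min(available, key=lambda c: (-counts[c], c))

def subsets(s):
    s = sorted(s)
    return (set(t) for k in range(len(s) + 1) for t in combinations(s, k))

def optimal_cost(profile, Y, U, p):
    """Minimum expected number of queries to reach an r-sufficient
    information set (exponential-time, as noted on the slide)."""
    profile = tuple(tuple(r) for r in profile)

    @lru_cache(maxsize=None)
    def cost(Y, U):
        Ys, Us = set(Y), set(U)
        winners = {plurality(profile, Ys | S) for S in subsets(Us) if Ys | S}
        if len(winners) <= 1:
            return 0.0                      # already r-sufficient: stop
        # Branch on which candidate to query next; recurse on both answers.
        return min(
            1.0
            + p[x] * cost(frozenset(Ys | {x}), frozenset(Us - {x}))
            + (1 - p[x]) * cost(Y, frozenset(Us - {x}))
            for x in Us
        )

    return cost(frozenset(Y), frozenset(U))
```

On the 9-voter profile used earlier (4×a≻b≻c, 3×b≻c≻a, 2×c≻b≻a) with Y = {c} and unknown a, b at probability 0.5 each, querying b first is optimal: if b is unavailable, c wins outright, giving expected cost 1.5 versus 2.0 for querying a first.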

  11. Myopically Constructing Decision Trees
• Well-known approach: maximize information gain at every node until pure training sets – the leaves – are reached (as in C4.5).
• Myopic step: query the candidate with the highest "information gain" (decrease in entropy of the training set).
• Running time: much lower than the full dynamic program.
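
A single myopic step can be sketched as follows – weighting each consistent completion by its probability and picking the query with the largest expected entropy drop (plurality and independent availabilities assumed, as before):

```python
import math
from collections import Counter
from itertools import combinations

def plurality(profile, available):
    counts = Counter(next(c for c in r if c in available) for r in profile)
    return min(available, key=lambda c: (-counts[c], c))

def subsets(s):
    s = sorted(s)
    return (set(t) for k in range(len(s) + 1) for t in combinations(s, k))

def myopic_query(profile, Y, U, p):
    """Pick the unknown candidate whose answer is expected to most
    reduce the entropy of the winner label over the training set."""
    pts = []                        # (completion, weight, winner label)
    for S in subsets(U):
        if not (Y | S):
            continue
        w = math.prod(p[c] if c in S else 1 - p[c] for c in U)
        pts.append((frozenset(S), w, plurality(profile, Y | S)))

    def entropy(points):
        tot = sum(w for _, w, _ in points)
        dist = Counter()
        for _, w, lab in points:
            dist[lab] += w
        return -sum((w / tot) * math.log2(w / tot) for w in dist.values() if w)

    def expected_entropy(x):        # entropy after learning x's availability
        yes = [t for t in pts if x in t[0]]
        no = [t for t in pts if x not in t[0]]
        return p[x] * entropy(yes) + (1 - p[x]) * entropy(no)

    return min(sorted(U), key=expected_entropy)
```

On the small profile from the earlier sketches (Y = {c}, unknowns a and b at probability 0.5), the myopic step picks b – which happens to coincide with the DP-optimal first query there.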

  12. Empirical Results
• 100 votes, fixed availability probability.
• Dispersion parameter φ (φ = 1 gives the uniform distribution).
• Tested for Plurality, Borda, Copeland.
• Preferences drawn i.i.d. from a Mallows φ-distribution: probabilities decrease exponentially with distance from a "reference" ranking.
[Chart: average cost (# of queries).]

  13. Empirical Results
• Cost decreases as the availability probability increases – less uncertainty about the set of available candidates.
• Myopic performed very close to the optimal DP algorithm.
• Not shown:
  • Cost increases with the dispersion parameter φ – "noisier"/more diverse preferences.
  • Approximation: stop the recursion early, when the training set is sufficiently pure.
[Chart: average cost (# of queries).]

  14. Additional Results
• Query complexity: expected number of queries under a worst-case preference profile.
• Result: bounds on the worst-case expected query complexity for Plurality, Borda, and Copeland.
• Simplified policies: assume the same availability probability for all candidates. Then there is a simple iterative query policy that is asymptotically optimal.

  15. Conclusions &amp; Future Directions
• A framework for querying candidates under a probabilistic availability model.
• Connections to control of elections.
• Two algorithms for generating decision trees: DP and myopic.
• Future directions:
  • Ways of pruning the decision trees (depending on the voting rule).
  • Sample-based methods for reducing training-set size.
  • A deeper theoretical study of the query complexity.
