Robust Winners and Winner Determination Policies under Candidate Uncertainty
JOEL OREN, UNIVERSITY OF TORONTO
JOINT WORK WITH CRAIG BOUTILIER, JÉRÔME LANG AND HÉCTOR PALACIOS.
Motivation – Winner Determination under Candidate Uncertainty
•A committee, with preferences over
alternatives:
• Prospective projects.
• Goals.
[Figure: a 9-voter profile — 4 voters: a ≻ b ≻ c, 3 voters: b ≻ c ≻ a, 2 voters: c ≻ a ≻ b — where each candidate's availability is uncertain (a?, b?, c?).]
• “Best” alternative depends on available ones.
•Costly determination of availabilities:
• Market research for determining the
feasibility of a project: engineering
estimates, surveys, focus groups, etc.
Efficient Querying Policies for Winner Determination
•Voters submit votes in advance.
•Query candidates sequentially, until enough is known to determine the winner.
•Example: 𝑄(𝑎) → 𝑌, 𝑄(𝑏) → 𝑌 ⇒ 𝑎 wins.
[Figure: the same 9-voter profile; once 𝑎 and 𝑏 are known to be available, 𝑎 is the plurality winner regardless of 𝑐's availability.]
The Formal Model
•A set C of candidates.
•A vector, 𝒗, of rankings (a preference
profile).
•The candidate set is partitioned:
 1. 𝑌 ⊆ 𝐶 – candidates known a priori to be available.
 2. 𝑈 = 𝐶 ∖ 𝑌 – the “unknown” set.
•Each candidate 𝑥 ∈ 𝐶 is available with
probability 𝑃(𝑥).
•Voting rule: 𝑟(𝒗) ∈ 𝐶 is the election winner.
[Figure: 𝐶 partitioned into 𝑌 (available) and 𝑈 (unknown), alongside the voters' rankings over 𝐶.]
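To make the model concrete, here is a minimal Python sketch (illustrative code, not from the paper; all names are ours), using the 9-voter example from the motivation and plurality as the rule 𝑟:

```python
from collections import Counter

# A ranking lists candidates from most to least preferred.
# Running example: 4 voters a>b>c, 3 voters b>c>a, 2 voters c>a>b.
profile = ([["a", "b", "c"]] * 4 +
           [["b", "c", "a"]] * 3 +
           [["c", "a", "b"]] * 2)

def restrict(profile, available):
    """v(X): restrict each ranking to the candidates in `available`."""
    return [[c for c in ranking if c in available] for ranking in profile]

def plurality_winner(profile, available):
    """r(v(A)): plurality winner among the available candidates.
    Ties broken lexicographically (our assumption, not the paper's)."""
    tally = Counter(r[0] for r in restrict(profile, available) if r)
    return min(tally, key=lambda c: (-tally[c], c))

print(plurality_winner(profile, {"a", "b", "c"}))  # 'a' (4 first places)
print(plurality_winner(profile, {"a", "c"}))       # 'c' — dropping b flips the winner
```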
Querying & Decision Making
•At each iteration, submit a query 𝑄(𝑥) for some candidate 𝑥 ∈ 𝑈.
•Information set 𝑄 = (𝑄⁺, 𝑄⁻): the candidates found available and unavailable so far.
•Initial available set 𝐴 = 𝑌.
•Upon querying candidate 𝑥 ∈ 𝑈:
• If available: add to 𝐴.
• If unavailable: remove from 𝑈.
•𝑣(𝑋) – restriction of pref. profile to the candidate
set 𝑋 ⊆ 𝐶.
•Stop when 𝑄 is 𝑟-sufficient – no additional
querying will change 𝑤(𝑄) – the “robust” winner.
[Figure: an example state — 𝑄⁺ = {𝑎}, 𝑄⁻ = {𝑏}; the unqueried candidates have availability probabilities 0.5, 0.7, and 0.4.]
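The loop below sketches the querying process end to end (again illustrative; it reuses `profile` and `plurality_winner` from the sketch above). The 𝑟-sufficiency test is brute force over all completions of 𝑈, so it is exponential and only meant to pin down the definition; which candidate to query is left as a placeholder — the policies discussed later are about making that choice well.

```python
from itertools import combinations

def r_sufficient(profile, rule, A, U):
    """Q is r-sufficient iff every completion of the unknowns U
    yields the same winner, so no further query can matter."""
    winners = {rule(profile, set(A) | set(S))
               for k in range(len(U) + 1)
               for S in combinations(sorted(U), k)}
    return len(winners) == 1

def query_until_sufficient(profile, rule, Y, U, is_available):
    """Run the process: A starts at Y; a queried candidate moves
    into A if available, and simply leaves U if not."""
    A, U = set(Y), set(U)
    while not r_sufficient(profile, rule, A, U):
        x = sorted(U)[0]        # placeholder policy: query any unknown
        U.remove(x)
        if is_available(x):     # the (costly) availability query Q(x)
            A.add(x)
    return rule(profile, A)     # the robust winner w(Q)

# The earlier example: c known available, a and b unknown but in fact
# available; the loop queries a, then b, and reports a as the winner.
print(query_until_sufficient(profile, plurality_winner,
                             {"c"}, {"a", "b"}, lambda x: x in {"a", "b"}))
```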
Computing a Robust Winner
•Robust winner: Given (𝐶, 𝑌, 𝒗, 𝑟), 𝑥 ∈ 𝑌 is a robust winner if 𝑥 = 𝑟(𝒗(𝐴)) for all 𝑌 ⊆ 𝐴 ⊆ 𝐶.
•A related question in voting [destructive control by adding candidates]: given a candidate set 𝐶, a disjoint spoiler set 𝐷, a pref. profile 𝒗 over 𝐶 ∪ 𝐷, a candidate 𝑥 ∈ 𝐶, and a voting rule 𝑟(⋅):
 • Question: is there a subset 𝐵 ⊆ 𝐷 s.t. 𝑟(𝒗(𝐶 ∪ 𝐵)) ≠ 𝑥?
•Proposition: Candidate 𝑥 ∈ 𝑌 is a robust winner ⇔ there is no destructive control against 𝑥, where the spoiler set is 𝑈 = 𝐶 ∖ 𝑌.
[Figure: taking 𝑌 as the original candidates and 𝑈 = 𝐶 ∖ 𝑌 as spoilers, control succeeds if some 𝐷 ⊆ 𝑈 gives 𝑟(𝒗(𝑌 ∪ 𝐷)) = 𝑦 ≠ 𝑥.]
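The definition can be checked directly by enumerating every spoiler subset — a minimal sketch, exponential in |𝑈| and purely illustrative (the polynomial-time results below rely on rule-specific criteria instead); it reuses `plurality_winner` from the first sketch:

```python
from itertools import combinations

def is_robust_winner(profile, rule, Y, U, x):
    """x in Y is a robust winner iff x = r(v(A)) for every A with
    Y ⊆ A ⊆ Y ∪ U, i.e. no spoiler subset of U can displace x."""
    return all(rule(profile, set(Y) | set(S)) == x
               for k in range(len(U) + 1)
               for S in combinations(sorted(U), k))
```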
•Implication: Plurality, Bucklin, ranked pairs – coNP-complete; Copeland, maximin – polynomial-time tractable.
•Additional results: checking whether 𝑥 ∈ 𝑌 is a robust winner for top cycle, uncovered set, and Borda can be done in polynomial time.
 • Top cycle & uncovered set: via criteria on the corresponding majority graph.
The Query Policy
•Goal: design a policy for finding the correct winner.
•Can be represented by a decision tree.
•Example for the vote profile (plurality):
• abcde, abcde, adbec,
• bcaed, bcead,
• cdeab, cbade, cdbea
[Figure: a decision tree for this profile over 𝑈 = {a, b, c, d}: each internal node queries a candidate, branches on available (𝑄⁺) vs. unavailable (𝑄⁻), and each leaf is labelled with the winner.]
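Such a policy tree can be represented directly — a minimal sketch with illustrative names; internal nodes query a candidate, leaves name the winner:

```python
from dataclasses import dataclass
from typing import Union

# A leaf of the policy tree is simply the winning candidate's name.
Tree = Union["QueryNode", str]

@dataclass
class QueryNode:
    candidate: str        # the candidate queried at this node
    if_available: Tree    # subtree taken when Q(candidate) answers yes
    if_unavailable: Tree  # subtree taken when Q(candidate) answers no

def run_policy(tree: Tree, is_available) -> str:
    """Walk the tree, issuing one availability query per internal node."""
    while isinstance(tree, QueryNode):
        tree = (tree.if_available if is_available(tree.candidate)
                else tree.if_unavailable)
    return tree
```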
Winner Determination Policies as Trees
•r-Sufficient tree:
• Information set at each leaf is 𝑟-sufficient.
• Each leaf is correctly labelled with the winner.
•𝑐_𝑞(𝑥) – cost of querying candidate/node 𝑥.
[Figure: an example query tree; e.g., the branch along which {𝑎, 𝑏} ⊆ 𝐴 ends in a leaf labelled "𝑎 wins".]
•𝑐(𝑇) – expected cost of policy, over dist. of 𝐴 ⊆ 𝐶.
Recursively Finding Optimal Decision Trees
•Cost of a tree: 𝑐(𝑇) = Σ_{𝐴⊆𝐶} Pr(𝐴) ⋅ 𝑐(𝑇, 𝐴), where 𝑐(𝑇, 𝐴) is the cost of the query path 𝑇 follows when the true available set is 𝐴.
•Can be solved using a dynamic-programming approach (sketched below).
•Running time: 𝑂(3^|𝑈|) – computationally heavy (each 𝑥 ∈ 𝑈 is in 𝑄⁺, in 𝑄⁻, or unqueried).
•For each node 𝑥 – a training set: the possible true underlying sets 𝐴 that agree with 𝑄.
 • Example: after the root queries 𝑥, its "available" child has 𝐸_𝑥 = {𝐴 : 𝑥 ∈ 𝐴} and its "unavailable" child 𝑦 has 𝐸_𝑦 = {𝐴 : 𝑥 ∉ 𝐴}.
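The recursion can be sketched as follows (our reconstruction of the idea, not the paper's code; it reuses `profile` and `plurality_winner`, assumes a probability map `P` and unit query costs by default, and uses the brute-force sufficiency test, so it is for intuition only). The memoized state is the information set — each 𝑥 ∈ 𝑈 is known-available, known-unavailable, or unqueried, giving the 3^|𝑈| states:

```python
from functools import lru_cache
from itertools import combinations

def optimal_expected_cost(profile, rule, Y, U0, P, cost=lambda x: 1.0):
    """Minimum expected query cost over all r-sufficient policy trees."""

    def sufficient(A, U):
        winners = {rule(profile, set(A) | set(S))
                   for k in range(len(U) + 1)
                   for S in combinations(sorted(U), k)}
        return len(winners) == 1

    @lru_cache(maxsize=None)
    def opt(A, U):
        if sufficient(A, U):
            return 0.0               # a leaf: the winner is determined
        return min(                  # best candidate to query next
            cost(x)
            + P[x] * opt(A | {x}, U - {x})       # branch: x available
            + (1 - P[x]) * opt(A, U - {x})       # branch: x unavailable
            for x in U)

    return opt(frozenset(Y), frozenset(U0))

# c known available, a and b unknown: querying b first is optimal
# (if b is unavailable, c already wins robustly), giving cost 1.7.
print(optimal_expected_cost(profile, plurality_winner,
                            {"c"}, {"a", "b"}, {"a": 0.5, "b": 0.7}))
```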
Myopically Constructing Decision Trees
•A well-known approach: maximize information gain at every node until the training sets are pure – these become the leaves (as in C4.5).
•Myopic step: query the candidate with the highest "information gain" (decrease in entropy of the training set) – see the sketch below.
•Running time: 𝑂(2^|𝑈|) ≪ 3^|𝑈|.
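A sketch of the myopic step (illustrative: the slide specifies the criterion — gain = expected drop in the entropy of the winner label over the training set — but the exact bookkeeping here is ours; it reuses `profile`, `plurality_winner`, and a probability map `P`):

```python
import math
from itertools import combinations

def myopic_query(profile, rule, A, U, P):
    """Pick the unknown candidate with the highest information gain."""

    def training_set(A, U):
        # Each completion S of U with its probability and its winner.
        out = []
        for k in range(len(U) + 1):
            for S in combinations(sorted(U), k):
                pr = 1.0
                for x in U:
                    pr *= P[x] if x in S else 1 - P[x]
                out.append((pr, rule(profile, set(A) | set(S))))
        return out

    def entropy(examples):
        mass = {}
        for pr, w in examples:
            mass[w] = mass.get(w, 0.0) + pr
        total = sum(mass.values())
        return -sum((m / total) * math.log2(m / total)
                    for m in mass.values() if m > 0)

    base = entropy(training_set(A, U))

    def gain(x):
        rest = set(U) - {x}
        pos = entropy(training_set(set(A) | {x}, rest))  # x available
        neg = entropy(training_set(set(A), rest))        # x unavailable
        return base - (P[x] * pos + (1 - P[x]) * neg)

    return max(U, key=gain)

# On the running example it agrees with the DP: query b first.
print(myopic_query(profile, plurality_winner, {"c"}, {"a", "b"},
                   {"a": 0.5, "b": 0.7}))  # -> 'b'
```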
Empirical Results
•|𝐶| = 10, 100 votes, availability probability 𝑝 ∈ {0.3, 0.5, 0.9}.
•Dispersion parameter 𝜑 ∈ {0.8, 1.0} (𝜑 = 1 ≡ uniform distribution).
•Tested for Plurality, Borda, Copeland.
•Preference profiles drawn i.i.d. from the Mallows 𝜑-distribution: a ranking's probability decreases exponentially with its distance from a "reference" ranking.

Average cost (# of queries), 𝜑 = 0.8:

Method               𝑝 = 0.3   𝑝 = 0.5   𝑝 = 0.9
Plurality, DP          4.1       3.4       2.7
Plurality, Myopic      4.1       3.5       2.8
Borda, DP              3.7       2.7       1.7
Borda, Myopic          3.7       2.7       1.7
Empirical Results
•Cost decreases as 𝑝 increases – less uncertainty about the set of available candidates.
•Myopic performed very close to the optimal DP algorithm (see the table above).
•Not shown: cost increases with the dispersion parameter – "noisier"/more diverse preferences.
•𝛿-approximation: stop the recursion when the training set is 𝛿-pure.
 • For Plurality with 𝜑 = 1, 𝑝 = 0.9: 𝑐(𝑇_DP) = 5.42.
 • For 𝛿 = 0.001, 0.01, 0.1: 𝑐(𝑇_{𝛿-DP}) = 4.97, 4.36, 3.04 respectively.
Additional Results
•Query complexity: expected number of queries under a worst-case
preference profile.
• Result: For Plurality, Borda, and Copeland, worst-case exp. query
complexity is Ω(|𝑈|).
•Simplified policies: Assume 𝑃(𝑥) ≥ 𝑝 for all 𝑥 ∈ 𝑈. Then there is a simple iterative query policy that is asymptotically optimal as 𝑝 → 1.
Conclusions & Future Directions
•A framework for querying candidates under a probabilistic availability model.
•Connections to control of elections.
•Two algorithms for generating decision trees: DP, Myopic.
•Future directions:
1. Ways of pruning the decision trees (depending on the voting rule).
2. Sample-based methods for reducing training set size.
3. Deeper theoretical study of the query complexity.