Learning to Explain Voting Rules
I. Kang, Q. Han, L. Xia
AAMAS 2023
Abstract
Explaining the outcome of an election is a crucial task, especially under complex voting rules: for readers without a background in social choice, understanding why a particular candidate won can be difficult. One way to explain a voting rule is with a decision tree, which lets the reader follow the reasoning behind the outcome step by step. This work proposes a methodology for explaining voting rules using decision-tree-based classifiers. Using simple features, the classifiers can be trained to high accuracy while remaining small enough to be human-readable. We test this framework on well-established voting rules -- Copeland, Kemeny-Young, Ranked Pairs, and Schulze -- experimenting with different decision tree algorithms on a synthetic dataset to generate explanations for each election's outcome. We find that Copeland and Schulze with three candidates can be learned perfectly using an optimized decision tree algorithm, while the other rules are learned to high accuracy experimentally.
Experiments:
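To make the setup concrete, the following is a minimal sketch of the kind of experiment described in the abstract, not the paper's exact pipeline: we generate a synthetic dataset of random three-candidate elections, use the pairwise margins as simple features, label each election with its Copeland winner (ties broken by candidate index, an assumption made here for the sketch), and fit a standard CART decision tree with scikit-learn. The profile sizes, feature choice, and tie-breaking rule are illustrative assumptions.

```python
# Hypothetical sketch of the experimental setup: learn the Copeland winner
# for 3 candidates from pairwise-margin features with a CART decision tree.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
CANDS = 3
PAIRS = list(itertools.combinations(range(CANDS), 2))  # (0,1), (0,2), (1,2)

def random_profile(n_voters=7):
    """n_voters random strict rankings over the candidates (odd n avoids pairwise ties)."""
    return [rng.permutation(CANDS) for _ in range(n_voters)]

def margins(profile):
    """Pairwise margin m[a, b] = (#voters preferring a to b) - (#preferring b to a)."""
    m = np.zeros((CANDS, CANDS), dtype=int)
    for ranking in profile:
        pos = np.argsort(ranking)  # pos[c] = position of candidate c in the ranking
        for a, b in PAIRS:
            if pos[a] < pos[b]:
                m[a, b] += 1; m[b, a] -= 1
            else:
                m[b, a] += 1; m[a, b] -= 1
    return m

def copeland_winner(m):
    """Copeland score = number of pairwise victories; ties broken by lowest index."""
    scores = (m > 0).sum(axis=1)
    return int(np.argmax(scores))

def make_dataset(n_elections):
    X, y = [], []
    for _ in range(n_elections):
        m = margins(random_profile())
        X.append([m[a, b] for a, b in PAIRS])  # 3 simple margin features
        y.append(copeland_winner(m))
    return np.array(X), np.array(y)

X_train, y_train = make_dataset(5000)
X_test, y_test = make_dataset(1000)
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
acc = tree.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}, leaves: {tree.get_n_leaves()}")
```

Because the Copeland winner under three candidates is a deterministic function of the signs of the three pairwise margins, even a greedy tree recovers an essentially perfect, human-readable classifier here; the paper's point is that an optimized tree algorithm achieves this with a certified-minimal tree, and that harder rules such as Kemeny-Young still reach high accuracy.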