Designing Rules to Pick a Rule: Aggregation by Consistency

ICLR 2026 Conference Submission. Anonymous Authors.
Keywords: rank aggregation, rule picking rules, consistency
Abstract:

Rank aggregation has critical applications for developing AI agents, as well as for evaluating them. However, different methods can give rise to significantly different aggregate rankings, which directly affects these applications. Indeed, work in social choice and statistics has produced many rank aggregation methods, each with its own desirable properties but also its limitations. Given this trade-off, how do we decide which aggregation rule to use, i.e., what is a good rule picking rule (RPR)? In this paper, we design a data-driven RPR that identifies the best method for each dataset without assuming a generative model. The principle behind our RPR is to maximize consistency if the data collection process were repeated. We show that our method satisfies several consistency-related axioms that a wide class of natural RPRs fail. While we prove that the computational problem of maximizing consistency is hard, we provide a sampling-based implementation that is efficient in practice. We run this implementation on known statistical models to experimentally demonstrate its desirable properties, as well as on real-world data, where our method provides important insights into how to improve consistency.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes a data-driven framework for selecting rank aggregation methods based on maximizing consistency across repeated data collection processes. It resides in the 'Method Selection Frameworks and Meta-Analysis' leaf, which contains only two papers total (including this one). This represents a notably sparse research direction within the broader taxonomy of 50 papers. The sibling paper addresses ranking result aggregation evaluation, suggesting that meta-level frameworks for choosing aggregation methods remain underexplored compared to the development of new aggregation algorithms themselves.

The taxonomy reveals substantial activity in adjacent areas: 'Empirical Comparisons and Benchmarking' contains three papers comparing methods experimentally, while 'Comprehensive Surveys and Reviews' includes two systematic overviews. The broader 'Algorithm Design and Optimization' branch encompasses 14 papers across four paradigms (graph-based, weighted, stochastic, and alternative approaches). This structural context indicates that while the field has produced many aggregation techniques and some comparative studies, principled frameworks for method selection—especially those not assuming specific generative models—occupy a relatively underdeveloped niche between algorithm design and empirical evaluation.

Among 30 candidates examined, none clearly refute the three main contributions: the RPR framework concept (10 candidates, 0 refutable), the Aggregation by Consistency method (10 candidates, 0 refutable), and the axiomatic analysis (10 candidates, 0 refutable). This limited search scope suggests that within the examined literature, the consistency-maximization principle for method selection appears novel. However, the small candidate pool and the sparse population of the target taxonomy leaf indicate that more exhaustive searches in related meta-analysis domains or theoretical social choice literature might reveal additional relevant prior work not captured by semantic similarity to this paper's framing.

Given the limited 30-candidate search and the sparse two-paper taxonomy leaf, the work appears to address a genuine gap in providing principled, data-driven method selection without generative assumptions. The absence of refuting candidates across all contributions suggests novelty within the examined scope, though the small search scale and the paper's position in an underpopulated research direction warrant caution. The analysis covers top-K semantic matches and does not exhaustively survey theoretical social choice or meta-learning literature that might contain related frameworks.

Taxonomy

- Core-task Taxonomy Papers: 50
- Claimed Contributions: 3
- Contribution Candidate Papers Compared: 30
- Refutable Papers: 0

Research Landscape Overview

Core task: Selecting appropriate rank aggregation methods for given datasets.

The field of rank aggregation has evolved into a rich landscape organized around five main branches:

- Rank Aggregation Algorithm Design and Optimization focuses on developing novel algorithmic techniques and improving computational efficiency, including weighted approaches like Weighted Rank Aggregation[3] and optimization-based methods such as Kemeny Quantum Optimization[22].
- Application Domains and Task-Specific Implementations addresses domain-specific challenges in areas ranging from bioinformatics (Robust Biological Aggregation[7], Computational Pathology Aggregation[8]) to recommendation systems (Recommendation Systems Comparison[5], Group Recommendations[17]) and feature selection (Feature Selection Techniques[6], Ensemble Feature Selection Stability[2]).
- Fairness-Aware and Privacy-Preserving Aggregation tackles ethical considerations through works like Fair Rank Aggregation[29] and Private Pairwise Rankings[4].
- Comparative Studies and Methodological Frameworks provides meta-level analysis through comprehensive surveys (Rank Aggregation Survey[12]) and method comparison tools (pyRankMCDA[18]).
- Specialized Aggregation Scenarios and Extensions explores emerging contexts such as partial rankings (Partial Label Ranking[19]) and federated settings (Federated Educational Scoring[42]).

A particularly active tension exists between algorithm-centric optimization work and application-driven method selection frameworks. While many studies develop sophisticated aggregation techniques for specific domains, fewer works systematically address how practitioners should choose among competing methods for their particular datasets.

Aggregation by Consistency[0] sits squarely within the Comparative Studies and Methodological Frameworks branch, specifically in Method Selection Frameworks and Meta-Analysis. Its emphasis on consistency-based selection criteria distinguishes it from purely algorithmic contributions and aligns it closely with meta-analytical works like Ranking Result Aggregation[27], which also examines how to evaluate and compare aggregation outcomes. Unlike domain-specific implementations or fairness-focused approaches, this work addresses the fundamental question of method appropriateness, providing guidance for researchers facing the challenge of selecting from an increasingly diverse toolkit of aggregation techniques.

Claimed Contributions

Novel framework for rule picking rules (RPR)

The authors introduce a formal framework for defining rule picking rules that allows designing principled ways of selecting an aggregation rule appropriate for the data, without committing to a set of axioms or a generative model a priori. This framework enables selecting from any set of candidate rules.
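The paper's formal definitions are not reproduced in this report. As a rough illustration only, an RPR can be typed as a higher-order function that maps a profile of rankings and a set of candidate aggregation rules to one of those rules. All names below (`Ranking`, `borda`, `pick_first`) are hypothetical, not the paper's API; `borda` stands in for any concrete candidate rule.

```python
from typing import Callable, List, Sequence

# A ranking is a permutation of item indices; a dataset (profile) is a list of rankings.
Ranking = List[int]
Dataset = List[Ranking]
# An aggregation rule maps a profile to a single aggregate ranking.
AggregationRule = Callable[[Dataset], Ranking]
# A rule picking rule (RPR) maps a profile and candidate rules to one chosen rule.
RulePickingRule = Callable[[Dataset, Sequence[AggregationRule]], AggregationRule]

def borda(data: Dataset) -> Ranking:
    """Borda count as one example candidate rule: score each item by its
    summed positions across rankings (lower total position is better)."""
    n = len(data[0])
    scores = [0] * n
    for ranking in data:
        for pos, item in enumerate(ranking):
            scores[item] += pos
    return sorted(range(n), key=lambda i: scores[i])

def pick_first(data: Dataset, rules: Sequence[AggregationRule]) -> AggregationRule:
    """A trivial RPR that ignores the data; any data-driven RPR,
    including AbC, fits this same signature."""
    return rules[0]
```

The point of the type signature is that the framework is agnostic to which candidate rules are supplied: an RPR only needs a way to evaluate each rule on the given profile.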

10 retrieved papers
Aggregation by Consistency (AbC) method

The authors propose a specific rule picking rule called Aggregation by Consistency that selects the aggregation method maximizing consistency between outputs on random splits of the data. This method is inspired by prior work linking consistency and quality in related settings like peer review, clustering, and AI alignment.

10 retrieved papers
Axiomatic analysis and impossibility results for RPRs

The authors define natural axioms for rule picking rules such as reversal symmetry and plurality-shuffling consistency, prove that AbC satisfies several of these axioms while a wide class of welfare-maximizing RPRs fail them, and establish impossibility results showing certain axioms are incompatible with each other.
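The report names the axioms but does not state them formally. Under one plausible reading of reversal symmetry — that an RPR should select the same rule on a profile and on the profile in which every voter's ranking is reversed — a single instance of the axiom can be checked mechanically. This reading and all function names are assumptions, not the paper's definitions.

```python
def reverse_profile(data):
    """Reverse every ranking in the profile."""
    return [list(reversed(r)) for r in data]

def satisfies_reversal_symmetry(rpr, data, rules):
    """Check one instance of a plausible reading of reversal symmetry:
    the RPR selects the same candidate rule on the reversed profile."""
    return rpr(data, rules) is rpr(reverse_profile(data), rules)
```

A check like this only refutes an axiom (one failing instance suffices); establishing that an RPR satisfies it, as the authors do for AbC, requires a proof over all profiles.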

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution
