Robust Federated Inference
Overview
Overall Novelty Assessment
The paper formalizes robust federated inference, where a central server aggregates predictions from distributed models without accessing local data or model parameters. It sits in the 'Robust Forecast Aggregation' leaf under 'Robust Statistical Aggregation for Distributed Data', which contains only two papers total. This is a notably sparse research direction compared to the densely populated 'Byzantine-Robust Aggregation in Federated Learning' branch (over 20 papers across six leaves). The work's focus on inference rather than training distinguishes it from most federated learning literature, positioning it at the intersection of statistical aggregation and adversarial robustness.
The taxonomy reveals that neighboring branches address related but distinct problems. The 'Byzantine-Robust Aggregation in Federated Learning' subtree emphasizes defending iterative training against malicious updates using geometric median or trimmed mean rules, while 'Multi-Agent and Cyber-Physical Systems' focuses on consensus and control under attacks. The paper's sibling work on forecast aggregation (one other paper in the same leaf) addresses algorithmic frameworks for combining forecasts with minimal regret. The scope note for this leaf explicitly excludes federated learning prediction aggregation, suggesting the paper bridges a gap between statistical forecast combination and adversarial federated settings.
Among the 30 candidates examined, none clearly refutes the three main contributions. For the formalization and robustness analysis of federated inference, 10 candidates were examined and no refuting overlap was found. The reformulation of robust federated inference as adversarial machine learning likewise surfaced no clear prior work among its 10 candidates. The DeepSet aggregator composition similarly turned up no refuting candidates among the 10 examined. Within this limited search scope, the specific combination of federated inference formalization, adversarial framing, and DeepSet-based robust aggregation therefore appears relatively unexplored, though the scale of the search (30 papers) leaves open the possibility of relevant work outside the top semantic matches.
The analysis indicates the paper occupies a sparse research niche, bridging statistical aggregation and adversarial federated learning. However, the limited search scope (30 candidates from semantic search) means this assessment reflects only the most semantically similar work, not an exhaustive field survey. The absence of refutable candidates may reflect genuine novelty in combining these specific elements, or may indicate that relevant prior work uses different terminology or appears in adjacent research communities not captured by the search strategy.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors formally define the robust federated inference problem where up to f clients can return arbitrarily corrupted probits, and provide theoretical analysis showing that the aggregator error depends on the fraction of corruptions, margin between top classes, and dissimilarity between honest responses.
The authors reformulate robust federated inference with non-linear aggregators as an adversarial learning problem over probit-vectors, enabling the application of adversarial training techniques to improve robustness.
The authors propose a DeepSet-based aggregator that combines adversarial training with test-time robust averaging via the coordinate-wise trimmed mean (CWTM). This composition exploits permutation invariance to reduce computational complexity and yields significant accuracy improvements over existing methods.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
[34] Algorithmic robust forecast aggregation
Contribution Analysis
Detailed comparisons for each claimed contribution
Formalization and robustness analysis of federated inference
The authors formally define the robust federated inference problem where up to f clients can return arbitrarily corrupted probits, and provide theoretical analysis showing that the aggregator error depends on the fraction of corruptions, margin between top classes, and dissimilarity between honest responses.
[1] Robust Aggregation for Federated Learning
[71] A robust privacy-preserving federated learning model against model poisoning attacks
[72] CRFL: Certifiably robust federated learning against backdoor attacks
[73] Towards Trustworthy Federated Learning with Untrusted Participants
[74] Mitigating Poisoning Attacks in Federated Learning Through Deep One-Class Classification
[75] MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
[76] FedGT: Identification of malicious clients in federated learning with secure aggregation
[77] Threats to federated learning: A survey
[78] Auto-weighted robust federated learning with corrupted data sources
[79] FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
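The corruption model behind this contribution (up to f clients returning arbitrary probits, error governed by the corruption fraction and the margin between top classes) can be made concrete with a toy sketch of a classical robust rule of this family, the coordinate-wise trimmed mean (CWTM). All names and numbers below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cwtm(probits: np.ndarray, f: int) -> np.ndarray:
    """Coordinate-wise trimmed mean: per class, drop the f largest and
    f smallest client scores, then average the remaining ones."""
    s = np.sort(probits, axis=0)              # sort along the client axis
    return s[f:probits.shape[0] - f].mean(axis=0)

rng = np.random.default_rng(0)
n, f, k = 10, 2, 3                            # clients, corruptions, classes
honest = rng.normal([2.0, 0.5, -1.0], 0.1, size=(n, k))   # class 0 wins honestly
probits = honest.copy()
probits[:f] = [[-50.0, 50.0, 0.0]] * f        # adversary overwrites f clients

assert np.argmax(probits.mean(axis=0)) == 1   # a plain mean is hijacked
assert np.argmax(cwtm(probits, f)) == 0       # CWTM preserves the honest margin
```

The sketch also hints at why the honest-class margin and the dissimilarity between honest responses enter the error bound: trimming only helps while the f corrupted scores land outside the spread of the honest ones.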
Casting robust federated inference as adversarial machine learning
The authors reformulate robust federated inference with non-linear aggregators as an adversarial learning problem over probit-vectors, enabling the application of adversarial training techniques to improve robustness.
[51] A Systematic Literature Review of Robust Federated Learning: Issues, Solutions, and Future Research Directions
[52] Federated Learning: Countering Label Flipping Attacks in Retinal OCT
[53] Byzantine Robust Aggregation in Federated Distillation with Adversaries
[54] PSIS-based blind watermarking scheme (PSISBW) with tamper detection
[55] Robustness, Efficiency, or Privacy: Pick Two in Machine Learning
[56] A fully decentralized privacy-enabled federated learning system
[57] Secure and Accountable Collaborative Learning
[58] Nonlinear Adaptive Federated Learning with Privacy Preservation for Edge-Cloud Systems
[59] Using a lightweight and efficient deep learning network to perform accurate microalgae spectral classification
[60] SVAFD: A Secure and Verifiable Co-Aggregation Protocol for Federated Distillation
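The reformulation described in this contribution amounts to a saddle-point objective: an inner maximization that corrupts the probit rows of up to f clients to maximize the aggregator's loss, nested inside an outer minimization over aggregator parameters. The snippet below sketches only the inner attack against a toy linear aggregator; the shapes, hyperparameters, and function names are assumptions for illustration, not the paper's method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, y):
    return -np.log(softmax(logits)[y] + 1e-12)

def aggregate(X, W):
    # Toy aggregator: stack all client probits and apply one linear map.
    return X.reshape(-1) @ W

def worst_case_probits(X, W, y, f, steps=15, lr=0.5):
    """Inner maximization: gradient-ascend the loss while perturbing only
    f client rows, matching the f-corruption threat model (corruptions are
    unbounded, so no projection onto a norm ball is needed)."""
    delta = np.zeros_like(X)
    rows = None
    for _ in range(steps):
        p = softmax(aggregate(X + delta, W))
        g_logits = p.copy()
        g_logits[y] -= 1.0                      # d(cross-entropy)/d(logits)
        g = (W @ g_logits).reshape(X.shape)     # pull the gradient back to probits
        if rows is None:                        # commit to the f most harmful clients
            rows = np.argsort(np.linalg.norm(g, axis=1))[-f:]
        delta[rows] += lr * g[rows]
    return X + delta

rng = np.random.default_rng(1)
n, k, f = 8, 3, 2
W = rng.normal(0.0, 0.2, size=(n * k, k))            # aggregator parameters
X = rng.normal([1.5, 0.0, -1.5], 0.2, size=(n, k))   # honest probits, label 0
y = 0

X_adv = worst_case_probits(X, W, y, f)
# Adversarial training would now take a descent step on W using
# cross_entropy(aggregate(X_adv, W), y) in place of the clean loss.
```

Casting the problem this way is what lets standard adversarial-training machinery apply: the attack surface is the probit-vector input to the aggregator rather than the raw data or model weights.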
Robust DeepSet aggregator with novel composition
The authors propose a DeepSet-based aggregator that combines adversarial training with test-time robust averaging via the coordinate-wise trimmed mean (CWTM). This composition exploits permutation invariance to reduce computational complexity and yields significant accuracy improvements over existing methods.
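The permutation-invariant shape of such an aggregator can be sketched as a DeepSet: a per-client encoder, a symmetric pooling step, and a decoder, with the mean pool swappable for CWTM at test time. The random-weight MLPs, dimensions, and function names below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp(dims):
    """Random-weight toy MLP layers (stand-ins for trained parameters)."""
    return [(rng.normal(0, 0.3, size=(a, b)), np.zeros(b)) for a, b in zip(dims, dims[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)            # ReLU on hidden layers
    return x

def cwtm(H, f):
    """Coordinate-wise trimmed mean over the client axis."""
    s = np.sort(H, axis=0)
    return s[f:H.shape[0] - f].mean(axis=0)

def deepset_aggregate(X, phi, rho, f=0):
    """DeepSet: encode each client's probit with phi, pool symmetrically,
    decode with rho. With f > 0 the mean pool becomes CWTM at test time."""
    H = forward(phi, X)                                  # (clients, hidden)
    pooled = H.mean(axis=0) if f == 0 else cwtm(H, f)    # permutation-invariant
    return forward(rho, pooled[None, :])[0]

k, d = 3, 16
phi, rho = mlp([k, d, d]), mlp([d, d, k])
X = rng.normal(0, 1, size=(10, k))                       # 10 clients' probits

perm = rng.permutation(10)
assert np.allclose(deepset_aggregate(X, phi, rho), deepset_aggregate(X[perm], phi, rho))
```

Because the pooled representation is invariant to client ordering, the aggregator never has to distinguish which positions the corrupted clients occupy, which is plausibly the source of the complexity reduction the contribution refers to.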