Protection against Source Inference Attacks in Federated Learning
Overview
Overall Novelty Assessment
The paper proposes a defense against source inference attacks (SIAs) in federated learning (FL) using parameter-level shuffling combined with the residue number system. It resides in the 'Source and Membership Inference Attacks' leaf under 'Privacy Attack Characterization and Threat Modeling', which contains five papers in total. This leaf represents a moderately populated research direction within the broader taxonomy of 50 papers across approximately 36 topics. The sibling papers in this leaf focus on characterizing inference threats rather than proposing defenses, suggesting the paper bridges attack analysis with mitigation strategies.
The taxonomy reveals that defense mechanisms occupy a separate major branch with four distinct leaves covering cryptographic approaches, differential privacy, model-centric defenses, and attack detection. The paper's shuffle-based defense naturally connects to the 'Cryptographic and Shuffling-Based Defenses' leaf, which contains five papers exploring secure aggregation and encoding schemes. The scope notes clarify that while the paper sits taxonomically among attack characterization works, its defensive contribution positions it at the boundary between threat modeling and mitigation strategies, potentially explaining why it appears somewhat isolated from its immediate siblings.
Among the three contributions analyzed, the first (reconstruction attacks against standard shuffling) examined 10 candidates with zero refutations, suggesting relative novelty in demonstrating shuffling vulnerabilities. The second contribution (robust defense in shuffle model) examined 9 candidates and found 2 refutable matches, indicating more substantial prior work in shuffle-based defenses. The third contribution (experimental validation) examined 10 candidates with no refutations. These statistics reflect a limited search scope of 29 total candidates examined, not an exhaustive literature review, meaning the analysis captures top semantic matches rather than comprehensive field coverage.
Based on the limited search scope, the work appears to occupy a niche intersection between attack demonstration and defense design within shuffle-based federated learning. The taxonomy structure suggests this specific combination of parameter-level shuffling with residue number systems may be relatively unexplored, though the broader shuffle defense paradigm has established precedents. The analysis acknowledges uncertainty inherent in examining only 29 candidates from a field of 50 surveyed papers, leaving open questions about related work in adjacent cryptographic or encoding-based defense approaches.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors introduce reconstruction algorithms for three shuffling granularities (model-level, layer-level, and parameter-level) that enable source inference attacks within the shuffle model of FL. These attacks demonstrate that standard shuffling alone is insufficient to protect against SIAs.
The authors present a defense mechanism that combines parameter-level shuffling with the residue number system (RNS) and unary encoding. This approach reduces SIA accuracy to random guessing without affecting joint model accuracy and can be seamlessly integrated into existing shuffle mechanisms.
The authors provide empirical evaluation demonstrating that standard shuffling approaches fail to prevent SIAs, while their proposed method successfully reduces attack accuracy to the level of random guessing across various datasets and model architectures.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
[1] Subject data auditing via source inference attack in cross-silo federated learning
[6] CMI: Client-targeted membership inference in federated learning
[7] A privacy preserving framework for federated learning in smart healthcare systems
[13] Interaction-level Membership Inference Attack Against Federated Recommender Systems
Contribution Analysis
Detailed comparisons for each claimed contribution
Novel reconstruction attacks against standard shuffling in federated learning
The authors introduce reconstruction algorithms for three shuffling granularities (model-level, layer-level, and parameter-level) that enable source inference attacks within the shuffle model of FL. These attacks demonstrate that standard shuffling alone is insufficient to protect against SIAs.
[66] SoK: Gradient Inversion Attacks in Federated Learning
[67] Privacy in federated learning
[68] Inverting Gradients -- How easy is it to break privacy in federated learning?
[69] GTV: Generating Tabular Data via Vertical Federated Learning
[70] Scale-MIA: A scalable model inversion attack against secure federated learning via latent space reconstruction
[71] AGIC: Approximate gradient inversion attack on federated learning
[72] Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning
[73] Revamping Federated Learning Security from a Defender's Perspective: A Unified Defense with Homomorphic Encrypted Data Space
[74] Towards accurate and stronger local differential privacy for federated learning with staircase randomized response
[75] Separation of Powers in Federated Learning (Poster Paper)
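The three shuffling granularities named in this contribution can be sketched with toy data. Everything below (the number of clients, the two-layer update shape, the Gaussian values) is an illustrative assumption rather than the paper's actual setup; the point is only that all three granularities permute updates differently while leaving the server-side aggregate unchanged, which is what makes shuffling attractive as a defense in the first place.

```python
import random

# Toy sketch of shuffling granularities in the shuffle model of FL.
# Client count, layer shapes, and values are illustrative assumptions.
random.seed(0)
n_clients = 4
# Each client's update: two "layers" of different sizes.
updates = [[[random.gauss(0, 1) for _ in range(3)],
            [random.gauss(0, 1) for _ in range(2)]]
           for _ in range(n_clients)]

def model_level(ups):
    # Permute whole updates across clients: hides only the sender identity.
    out = [u for u in ups]
    random.shuffle(out)
    return out

def layer_level(ups):
    # Permute each layer position independently across clients.
    out = [[layer[:] for layer in u] for u in ups]
    for l in range(len(out[0])):
        col = [u[l] for u in out]
        random.shuffle(col)
        for i, layer in enumerate(col):
            out[i][l] = layer
    return out

def parameter_level(ups):
    # Permute every coordinate independently across clients:
    # no client's update survives as a coherent vector.
    out = [[layer[:] for layer in u] for u in ups]
    for l in range(len(out[0])):
        for j in range(len(out[0][l])):
            col = [u[l][j] for u in out]
            random.shuffle(col)
            for i, v in enumerate(col):
                out[i][l][j] = v
    return out

def aggregate(ups):
    # Coordinate-wise mean: all a FedAvg-style server needs.
    return [[sum(u[l][j] for u in ups) / len(ups)
             for j in range(len(ups[0][l]))]
            for l in range(len(ups[0]))]

# Every granularity leaves the aggregate unchanged.
for shuffle in (model_level, layer_level, parameter_level):
    a, b = aggregate(updates), aggregate(shuffle(updates))
    assert all(abs(x - y) < 1e-9
               for la, lb in zip(a, b) for x, y in zip(la, lb))
```

The paper's attack result, in these terms, is that an adversarial server can often re-link shuffled pieces back to their senders (e.g. by matching values against per-client statistics), so aggregation-invariance alone does not imply source privacy.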
First robust defense against source inference attacks in the shuffle model
The authors present a defense mechanism that combines parameter-level shuffling with the residue number system (RNS) and unary encoding. This approach reduces SIA accuracy to random guessing without affecting joint model accuracy and can be seamlessly integrated into existing shuffle mechanisms.
[33] Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling
[39] Poster: Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling
[36] MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers
[56] When federated learning meets medical image analysis: A systematic review with challenges and solutions
[61] RAFLS: RDP-based adaptive federated learning with shuffle model
[62] Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning
[63] Secure Federated Matrix Factorization via Device-to-Device Model Shuffling
[64] Advanced Probabilistic Methods for Privacy Amplification: Cooperative and Non-Cooperative Approaches
[65] Secure Federated Matrix Factorization via Shuffling Encrypted Parameters Between Devices
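To make the RNS-plus-unary-encoding idea concrete, here is a minimal sketch. The moduli, the quantization range, and the example value are assumptions chosen for illustration, not parameters from the paper: a quantized parameter is split into small residues modulo pairwise-coprime moduli, each residue can be unary-encoded for shuffle-friendly aggregation, and the original value is recoverable via the Chinese Remainder Theorem.

```python
from math import prod

# Illustrative residue number system (RNS) sketch with unary residues.
# Moduli and the example value are assumptions, not the paper's parameters.
MODULI = (7, 11, 13)               # pairwise coprime; dynamic range M = 1001
M = prod(MODULI)

def to_rns(x):
    # Split a quantized parameter into residues; each residue alone
    # reveals little about x.
    return tuple(x % m for m in MODULI)

def residue_to_unary(r, m):
    # Unary-encode a residue r < m as an m-bit vector whose bit-sum is r,
    # so shuffled bits can still be aggregated by summation.
    return [1] * r + [0] * (m - r)

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction of the original value.
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
    return x % M

x = 425                                # assumed quantized value, 0 <= x < M
residues = to_rns(x)                   # (5, 7, 9)
assert from_rns(residues) == x
assert sum(residue_to_unary(residues[0], MODULI[0])) == residues[0]
```

The design intuition, under these assumptions, is that parameter-level shuffling scatters individual residue bits across clients, while the server can still sum the unary bits per position and apply CRT to recover exact aggregate values, which is consistent with the claim of no loss in joint model accuracy.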
Experimental validation across multiple models and datasets
The authors provide empirical evaluation demonstrating that standard shuffling approaches fail to prevent SIAs, while their proposed method successfully reduces attack accuracy to the level of random guessing across various datasets and model architectures.