Unified Privacy Guarantees for Decentralized Learning via Matrix Factorization

ICLR 2026 Conference Submission. Anonymous Authors.
Keywords: Differential Privacy, Decentralized Learning, Matrix Mechanism, Gossip
Abstract:

Decentralized Learning (DL) enables users to collaboratively train models without sharing raw data by iteratively averaging local updates with neighbors in a network graph. This setting is increasingly popular for its scalability and its ability to keep data local under user control. Strong privacy guarantees in DL are typically achieved through Differential Privacy (DP), with results showing that DL can even amplify privacy by disseminating noise across peer-to-peer communications. Yet in practice, the observed privacy-utility trade-off often appears worse than in centralized training, which may be due to limitations in current DP accounting methods for DL. In this paper, we show that recent advances in centralized DP accounting based on Matrix Factorization (MF) for analyzing temporal noise correlations can also be leveraged in DL. By generalizing existing MF results, we show how to cast both standard DL algorithms and common trust models into a unified formulation. This yields tighter privacy accounting for existing DP-DL algorithms and provides a principled way to develop new ones. To demonstrate the approach, we introduce MAFALDA-SGD, a gossip-based DL algorithm with user-level correlated noise that outperforms existing methods on synthetic and real-world graphs.

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholarly search engine). It analyzes the paper's tasks and contributions against retrieved prior work. While the system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. The results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
Note that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs), and the system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper contributes a generalized Matrix Factorization (MF) framework for differential privacy accounting in decentralized learning, along with a new algorithm (MAFALDA-SGD) that leverages correlated noise for user-level privacy. It resides in the 'Advanced Composition and Tightness Analysis' leaf, which contains only four papers total, indicating a relatively sparse research direction focused on tighter privacy bounds through novel composition techniques. This positioning suggests the work addresses a specialized but foundational problem: improving how we track cumulative privacy loss in decentralized settings where standard centralized accounting methods may not apply directly.

The taxonomy reveals that the broader 'Privacy Accounting Methods and Frameworks' branch includes neighboring leaves on Shuffle Model Privacy Accounting and Bayesian formulations, while sibling branches address algorithm design, budget allocation, and local DP mechanisms. The paper's focus on MF-based accounting distinguishes it from shuffle-model approaches (which analyze privacy amplification under random permutation) and from Bayesian relaxations. By generalizing MF techniques from centralized DP to decentralized trust models, the work bridges theoretical accounting rigor with the practical challenges of peer-to-peer communication, contrasting with purely algorithmic contributions in the 'Privacy-Preserving Algorithm Design' branch.

Among the thirty candidates examined, the contribution-level analysis shows varied novelty. For the generalized MF mechanism (Contribution 1) and the unified framework (Contribution 2), ten candidates each were examined with zero refutations, suggesting these theoretical extensions appear novel within the limited search scope. However, for MAFALDA-SGD (Contribution 3), one refutable candidate was found among the ten examined, indicating some overlap with prior work on correlated noise mechanisms in decentralized settings. This pattern suggests the accounting framework itself may be more distinctive than the specific algorithmic instantiation, though the search scope remains modest relative to the full literature.

Based on the top-thirty semantic matches and citation expansion, the work appears to occupy a niche intersection of advanced composition theory and decentralized architectures. The limited refutations for the core accounting contributions suggest potential novelty, but the analysis does not cover exhaustive prior work in related areas such as shuffle models or alternative trust assumptions. The single refutation for MAFALDA-SGD highlights that while the accounting framework may be fresh, the algorithmic instantiation builds on established correlated-noise techniques.

Taxonomy

- Core-task Taxonomy Papers: 50
- Claimed Contributions: 3
- Contribution Candidate Papers Compared: 30
- Refutable Papers: 1

Research Landscape Overview

Core task: Differential privacy accounting for decentralized learning algorithms. The field encompasses a broad spectrum of methods and concerns, organized into nine major branches. Privacy Accounting Methods and Frameworks focuses on developing tighter composition theorems and advanced analysis techniques, such as those explored in Shuffle Model of Differential Privacy [11] and Optimal Accounting of Differential Privacy [29]. Privacy-Preserving Algorithm Design addresses the construction of mechanisms that inject noise or employ cryptographic primitives, while Privacy Budget Allocation and Management examines how to distribute limited privacy resources across rounds or clients, as seen in Adaptive privacy budget allocation [8] and Dynamic privacy budget allocation [14]. Local Differential Privacy in Federated Settings emphasizes client-side noise addition, exemplified by Local differential privacy-based federated [3] and LDP-FL [9]. Meanwhile, Privacy-Utility Trade-off Analysis and Optimization investigates the balance between model accuracy and privacy guarantees, Personalized and Heterogeneous Federated Learning with DP tackles non-IID data and client-specific models, Robustness and Security in DP Federated Learning considers adversarial threats, System Architectures and Infrastructure for Private Decentralized Learning explores practical deployment, and Domain-Specific Applications of DP Decentralized Learning applies these techniques to healthcare, finance, and other sectors.

A particularly active line of work centers on refining composition bounds and understanding how privacy degrades over multiple rounds of interaction, with studies like Mitigating Privacy-Utility Trade-off in [27] and Harmonizing Differential Privacy Mechanisms [7] exploring tighter guarantees and adaptive strategies. Another contrasting theme is the tension between local and central trust models: local approaches such as LDP-Fed [36] empower clients to protect their own data, while centralized or shuffled models like Shuffled model of differential [6] leverage aggregation for improved utility.

Unified Privacy Guarantees for [0] sits within the Advanced Composition and Tightness Analysis cluster, emphasizing rigorous accounting techniques that unify disparate privacy notions across decentralized settings. Compared to Mitigating Privacy-Utility Trade-off in [5], which prioritizes practical utility optimization, and Optimal Accounting of Differential Privacy [29], which focuses on fundamental composition limits, the original work bridges theoretical tightness with the unique challenges of decentralized architectures, offering a cohesive framework for tracking cumulative privacy loss when multiple parties interact without a trusted central server.

Claimed Contributions

Generalized Matrix Factorization mechanism for broader privacy guarantees

The authors extend the Matrix Factorization (MF) mechanism's differential privacy guarantees to workload matrices that may be rectangular and rank-deficient, allowing adaptivity in decentralized learning. This generalization enables privacy analysis for a broader class of matrices beyond the square, full-rank, lower-triangular matrices required in prior work.

10 retrieved papers
Unified framework for analyzing DP-DL algorithms and trust models via Matrix Factorization

The authors develop a unified formulation that casts both standard decentralized learning algorithms and common trust models (LDP, PNDP, SecLDP) as instances of the Matrix Factorization mechanism. This framework provides tighter privacy accounting for existing DP-DL algorithms and offers a principled approach to designing new ones.

10 retrieved papers
MAFALDA-SGD algorithm with optimized user-level correlated noise

The authors introduce MAFALDA-SGD, a gossip-based decentralized learning algorithm that leverages the unified MF framework to optimize noise correlations for improved privacy-utility trade-offs. The algorithm outperforms existing methods on both synthetic and real-world graphs by exploiting temporal noise correlations within nodes.

10 retrieved papers
Can Refute

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Generalized Matrix Factorization mechanism for broader privacy guarantees

The authors extend the Matrix Factorization (MF) mechanism's differential privacy guarantees to workload matrices that may be rectangular and rank-deficient, allowing adaptivity in decentralized learning. This generalization enables privacy analysis for a broader class of matrices beyond the square, full-rank, lower-triangular matrices required in prior work.
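To make the claim concrete, the following is a minimal numpy sketch of the general MF mechanism idea as described above: a workload matrix A is factored as A = B·C, noise is added in the factored space, and the mechanism's sensitivity is governed by the column norms of C. The specific workload, factorization choice (here an SVD-based rank-revealing one), and noise multiplier are illustrative assumptions, not the paper's construction; the point is only that A may be rectangular and rank-deficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rectangular, rank-deficient workload: 4 queries over 3 steps, rank 2.
# (Prior MF analyses required square, full-rank, lower-triangular A.)
A = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [2., 1., 0.],
              [0., 1., 0.]])

# One valid factorization A = B @ C via a rank-revealing SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))           # numerical rank (here 2)
B = U[:, :r] * s[:r]                 # shape (4, r)
C = Vt[:r, :]                        # shape (r, 3)
assert np.allclose(B @ C, A)         # factorization is exact

# L2 sensitivity of the factored mechanism: max column norm of C
# (one column per participating step).
sens = np.max(np.linalg.norm(C, axis=0))

x = rng.standard_normal(3)                  # private per-step inputs
sigma = 1.0                                 # noise multiplier (assumed)
z = rng.standard_normal(r) * sigma * sens   # Gaussian noise in factored space
release = B @ (C @ x + z)                   # noisy answers to workload A
```

Because the noise is added in the r-dimensional factored space and then mapped through B, the calibration depends on C rather than on A being square or invertible, which is the flexibility the generalization targets.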

Contribution

Unified framework for analyzing DP-DL algorithms and trust models via Matrix Factorization

The authors develop a unified formulation that casts both standard decentralized learning algorithms and common trust models (LDP, PNDP, SecLDP) as instances of the Matrix Factorization mechanism. This framework provides tighter privacy accounting for existing DP-DL algorithms and offers a principled approach to designing new ones.
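The casting described above can be illustrated with a small numpy sketch: stacking the mixing operators of repeated gossip rounds yields a single workload matrix, and a trust model determines which rows of that workload the adversary observes. The ring topology, Metropolis-style weights, and the particular observed-row rule are hypothetical choices for illustration, not the paper's exact formulation.

```python
import numpy as np

n, T = 4, 3
# Doubly stochastic gossip matrix for a 4-node ring graph.
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])

# Round t applies W^t to the initial contributions, so T rounds of gossip
# correspond to the stacked workload [W; W^2; ...; W^T].
A = np.vstack([np.linalg.matrix_power(W, t + 1) for t in range(T)])  # (n*T, n)

# Trust model as row selection: suppose a curious node 0 sees its own state
# and the messages of its ring neighbors {1, 3} in every round.
visible = [0, 1, 3]
rows = [t * n + i for t in range(T) for i in visible]
A_obs = A[rows, :]

# Per-node sensitivity under this view: L2 norm of each node's column of
# the observed workload (LDP would instead expose every row of A).
sens = np.linalg.norm(A_obs, axis=0)
```

Under this view, swapping the trust model (LDP, PNDP, SecLDP) only changes which rows are kept, while the accounting machinery on the resulting matrix stays the same; that is the unification the contribution claims.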

Contribution

MAFALDA-SGD algorithm with optimized user-level correlated noise

The authors introduce MAFALDA-SGD, a gossip-based decentralized learning algorithm that leverages the unified MF framework to optimize noise correlations for improved privacy-utility trade-offs. The algorithm outperforms existing methods on both synthetic and real-world graphs by exploiting temporal noise correlations within nodes.
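The benefit of temporal noise correlation within a node can be sketched as follows. This is an illustrative toy, not MAFALDA-SGD itself: each round the node injects a_t = z_t - beta * z_{t-1}, so successive injections telescope and the accumulated noise stays far smaller than with independent noise of the same per-round magnitude; the parameters beta, sigma, T are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, beta, sigma, trials = 100, 0.9, 1.0, 2000

def accumulated_noise(correlated: bool) -> np.ndarray:
    """Total injected noise after T rounds, one value per trial."""
    z = rng.standard_normal((trials, T + 1)) * sigma
    if correlated:
        # Temporally correlated injections: a_t = z_t - beta * z_{t-1}.
        a = z[:, 1:] - beta * z[:, :-1]
    else:
        # Independent injections matched to the same per-round variance.
        a = rng.standard_normal((trials, T)) * sigma * np.sqrt(1 + beta**2)
    return a.sum(axis=1)

std_corr = accumulated_noise(True).std()
std_indep = accumulated_noise(False).std()
# For beta near 1, std_corr sits roughly an order of magnitude below
# std_indep, since the correlated terms largely cancel in the sum.
```

This cancellation is what an MF-based optimization of noise correlations exploits: the per-round noise (and hence the privacy cost seen by the adversary) is unchanged, while the error that accumulates in the averaged model shrinks.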
