INO-SGD: Addressing Utility Imbalance under Individualized Differential Privacy

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: differential privacy, individualized differential privacy, IDP-SGD, data imbalance, utility imbalance, accuracy disparity, collaborative machine learning
Abstract:

Differential privacy (DP) is widely employed in machine learning to protect confidential or sensitive training data from being revealed. As data owners gain greater control over their data due to personal data ownership, they are more likely to set their own privacy requirements, necessitating individualized DP (IDP) to fulfil such requests. In particular, owners of data from more sensitive subsets, such as positive cases of stigmatized diseases, likely set stronger privacy requirements, as leakage of such data could incur more serious societal impact. However, existing IDP algorithms induce a critical utility imbalance problem: Data from owners with stronger privacy requirements may be severely underrepresented in the trained model, resulting in poorer performance on similar data from subsequent users during deployment. In this paper, we analyze this problem and propose the INO-SGD algorithm, which strategically down-weights data within each batch to improve performance on the more private data across all iterations. Notably, our algorithm is specially designed to satisfy IDP, while existing techniques addressing utility imbalance neither satisfy IDP nor can be easily adapted to do so. Lastly, we demonstrate the empirical feasibility of our approach.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes INO-SGD, an algorithm that addresses utility imbalance arising when users set heterogeneous privacy requirements under individualized differential privacy (IDP). It resides in the 'Utility Imbalance and Subgroup Disparity Analysis' leaf, which contains five papers examining how DP exacerbates accuracy gaps across subgroups. This leaf sits within the broader 'Fairness and Utility Imbalance under Differential Privacy' branch, indicating a moderately populated research direction focused on understanding and mitigating disparate impacts of privacy mechanisms.

The taxonomy reveals that neighboring leaves address related but distinct concerns: 'Joint Fairness and Privacy Optimization in Federated Learning' (six papers) focuses on FL-specific fairness-privacy trade-offs, while 'Fairness-Aware Mechanisms for Centralized DP Models' (nine papers) develops training-time interventions for centralized settings. The 'Individualized and Personalized Privacy Mechanisms' branch (eleven papers across three leaves) explores heterogeneous privacy budgets but does not explicitly target utility imbalance. INO-SGD bridges these areas by proposing a centralized training algorithm that both satisfies IDP and mitigates the resulting utility gaps.

No literature search was conducted for this analysis, so no candidate papers were examined and no refutation statistics are available. The contribution-level analysis shows zero candidates examined for all three contributions: the INO-SGD algorithm, the IDP-induced utility imbalance analysis, and the INO-SGM generalization. Without empirical search results, we cannot assess whether prior work overlaps with these specific algorithmic or analytical contributions. The taxonomy context suggests the problem space is recognized, but the novelty of the proposed solution remains unverified by this limited analysis.

Given the absence of a literature search, this assessment relies solely on taxonomy structure and sibling paper positioning. The paper appears to occupy a recognized but not overcrowded niche at the intersection of individualized privacy and utility imbalance. A full novelty evaluation would require examining the sibling papers and related leaves to determine whether INO-SGD's strategic down-weighting approach or its IDP-specific design represents a substantive advance over existing disparity mitigation techniques.

Taxonomy

Core-task Taxonomy Papers: 41
Claimed Contributions: 3
Contribution Candidate Papers Compared: 0
Refutable Papers: 0

Research Landscape Overview

Core task: Addressing utility imbalance under individualized differential privacy. The field has evolved around four main branches that reflect distinct but interconnected concerns. The first branch, Individualized and Personalized Privacy Mechanisms, develops frameworks that allow heterogeneous privacy budgets across users, enabling personalized protection levels as seen in works like Utility-Optimized Local Privacy[14] and Personalized Privacy Federated[6]. The second branch, Fairness and Utility Imbalance under Differential Privacy, examines how privacy mechanisms can inadvertently create disparities in model utility across subgroups, with studies such as Neither Private Nor Fair[5] and FairDP[3] highlighting these tensions. The third branch focuses on Privacy-Utility Trade-offs and Empirical Interactions, exploring how different privacy regimes affect learning performance and convergence, while the fourth branch addresses Domain-Specific and Application-Oriented DP Frameworks, tailoring differential privacy to federated learning, crowdsourcing, and other specialized settings.

Recent work has intensified around the interplay between personalized privacy and fairness guarantees, with many studies seeking mechanisms that simultaneously respect individual privacy preferences and ensure equitable utility across demographic groups. INO-SGD[0] sits within the utility imbalance and subgroup disparity analysis cluster, addressing how individualized noise injection can exacerbate performance gaps between subpopulations. Its emphasis on balancing per-user privacy levels with group-level utility contrasts with approaches like Privacy Fairness Post-Processed[1] and Adaptive Utility Optimization[2], which apply post-hoc corrections or adaptive budget allocation to mitigate disparity.
Nearby works such as Hash-Induced Unfairness[24] and Privacy at Price[8] further explore how algorithmic choices and economic incentives shape fairness outcomes under personalized privacy, underscoring ongoing debates about whether utility imbalance is an inherent cost of individualization or a design challenge amenable to principled solutions.

Claimed Contributions

INO-SGD algorithm for addressing IDP-induced utility imbalance

The authors introduce the Individualized Noisy Ordered SGD (INO-SGD) algorithm that addresses utility imbalance arising from individualized differential privacy requirements. The algorithm strategically assigns importance scores to gradients based on loss ordering, down-weighting less important gradients while preserving IDP guarantees and improving model performance on data from owners with stronger privacy requirements.
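The loss-ordered down-weighting described above can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the paper's actual algorithm: the function name `ino_sgd_step`, the linear rank-based weighting, and the calibration of noise to the strongest privacy level in the batch are all our simplifications.

```python
import numpy as np

def ino_sgd_step(per_example_grads, losses, privacy_levels,
                 clip_norm=1.0, base_sigma=1.0, min_weight=0.2):
    """One hypothetical INO-SGD-style step (illustrative sketch only).

    per_example_grads: (B, d) per-example gradients
    losses:            (B,) per-example losses used for ordering
    privacy_levels:    (B,) values in (0, 1]; smaller = stronger privacy
    """
    B, d = per_example_grads.shape

    # Clip each per-example gradient to bound its sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Rank examples by loss: higher-loss examples get weights near 1,
    # lower-loss examples are down-weighted toward min_weight.
    ranks = np.empty(B)
    ranks[np.argsort(losses)] = np.arange(B)
    weights = min_weight + (1.0 - min_weight) * ranks / max(B - 1, 1)

    # Noisy weighted aggregate; here noise is calibrated to the strongest
    # privacy requirement in the batch (a deliberate simplification of
    # the per-individual calibration an IDP mechanism would need).
    sigma = base_sigma / privacy_levels.min()
    noisy = (weights[:, None] * clipped).sum(axis=0)
    noisy = noisy + np.random.normal(0.0, sigma * clip_norm, size=d)
    return noisy / B
```

Down-weighting a clipped gradient shrinks its contribution to the aggregate's sensitivity, which is the lever that lets a fixed noise scale deliver stronger effective privacy for some examples.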

0 retrieved papers
Analysis of IDP-induced utility imbalance problem

The authors identify and theoretically analyze a critical utility imbalance problem in individualized differential privacy settings, showing that data from owners with stronger privacy requirements may be severely underrepresented in trained models. They demonstrate that this problem differs from standard data imbalance and cannot be solved by existing techniques.
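One common way the underrepresentation arises can be illustrated numerically (our toy model, not the paper's analysis): when IDP is realized by giving each owner a Poisson sampling rate proportional to their privacy budget, an owner's expected number of gradient contributions over training scales with that budget.

```python
import numpy as np

def expected_contributions(epsilons, base_rate=0.01, T=10_000):
    """Expected number of sampled gradient contributions per owner
    over T steps, assuming per-step sampling rates proportional to
    each owner's privacy budget epsilon_i (a hypothetical IDP scheme)."""
    q = base_rate * epsilons / epsilons.max()  # per-step sampling rates
    return T * q

# Two strong-privacy owners (eps = 1) and two weak-privacy owners (eps = 8).
eps = np.array([1.0, 1.0, 8.0, 8.0])
contrib = expected_contributions(eps)
# Strong-privacy owners contribute 8x fewer gradients in this toy model,
# which is the kind of underrepresentation the report calls utility imbalance.
```

Note that this differs from ordinary class imbalance: the gap is induced by the privacy mechanism itself, so naive re-weighting would alter the sampling distribution and void the privacy accounting.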

0 retrieved papers
INO-SGM mechanism generalizing INO-SGD

The authors develop a generalized individualized differential privacy mechanism called INO-SGM that extends the INO-SGD approach beyond stochastic gradient descent. This mechanism provides a broader framework for applying score-based ordering while maintaining IDP guarantees.
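A generic "score-ordered" release of this kind might look like the sketch below; the function name, the linear weighting, and the interface are our assumptions, included only to show how the SGD case (values = clipped gradients, scores = losses) could fall out as a special case.

```python
import numpy as np

def ordered_gaussian_mechanism(values, scores, sensitivity, sigma, min_weight=0.2):
    """Hypothetical score-ordered Gaussian release: rank the (n, d) input
    rows by an arbitrary score, down-weight low-score rows, and add
    Gaussian noise scaled to the stated sensitivity."""
    n = len(values)
    ranks = np.empty(n)
    ranks[np.argsort(scores)] = np.arange(n)
    weights = min_weight + (1.0 - min_weight) * ranks / max(n - 1, 1)
    agg = (weights[:, None] * values).sum(axis=0)
    return agg + np.random.normal(0.0, sigma * sensitivity, size=values.shape[1])
```

Any aggregate query with a per-record sensitivity bound could, in principle, be plugged in as `values`, which is what "beyond stochastic gradient descent" would amount to under this reading.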

0 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

INO-SGD algorithm for addressing IDP-induced utility imbalance

The authors introduce the Individualized Noisy Ordered SGD (INO-SGD) algorithm that addresses utility imbalance arising from individualized differential privacy requirements. The algorithm strategically assigns importance scores to gradients based on loss ordering, down-weighting less important gradients while preserving IDP guarantees and improving model performance on data from owners with stronger privacy requirements.

Contribution

Analysis of IDP-induced utility imbalance problem

The authors identify and theoretically analyze a critical utility imbalance problem in individualized differential privacy settings, showing that data from owners with stronger privacy requirements may be severely underrepresented in trained models. They demonstrate that this problem differs from standard data imbalance and cannot be solved by existing techniques.

Contribution

INO-SGM mechanism generalizing INO-SGD

The authors develop a generalized individualized differential privacy mechanism called INO-SGM that extends the INO-SGD approach beyond stochastic gradient descent. This mechanism provides a broader framework for applying score-based ordering while maintaining IDP guarantees.