Batch and Sequential Unlearning for Neural Networks
Overview
Overall Novelty Assessment
This paper contributes two unlearning algorithms, CuReNU and StoCuReNU, that apply cubic regularization to Newton's method to handle degenerate Hessians in neural network unlearning. It sits in the 'Cubic Regularization for Degenerate Hessians' leaf of the taxonomy, of which it is currently the sole member. This placement points to a sparse research direction within the broader Hessian-Based Newton Methods branch: cubic regularization for unlearning remains relatively unexplored compared to neighboring areas.
The taxonomy reveals that adjacent leaves contain related but distinct techniques: 'Standard Newton Updates' covers direct Hessian inversion, while 'Hessian Inverse Approximation Techniques' covers low-rank updates and conjugate gradient methods. The parent branch also contains Hessian-Free Approaches that use randomized approximations or Gauss-Newton formulations. The scope notes clarify that methods avoiding Hessian computation belong elsewhere, whereas this work explicitly addresses Hessian degeneracy through regularization. This positioning suggests the paper connects standard Newton updates to the practical challenge of ill-conditioned curvature.
Across the three identified contributions, the literature search examined 21 candidates in total. For the first contribution (identifying Hessian degeneracy), 10 candidates were examined, of which 1 appears to provide overlapping prior work. For the second (the CuReNU/StoCuReNU algorithms), 1 candidate was examined that could refute novelty. For the third (the scalable Hessian-free implementation), 10 candidates were examined, of which 2 could potentially refute it. These counts reflect a limited search focused on semantic neighbors rather than exhaustive coverage. Among the candidates examined, the algorithmic contribution appears most vulnerable to prior-work overlap, while the degeneracy analysis and scalability aspects appear more novel.
Based on the 21 semantically similar papers examined, the work appears to occupy a relatively sparse position within second-order unlearning methods, though the limited search scope prevents a definitive assessment. The taxonomy structure suggests cubic regularization is an underexplored direction compared to standard Newton updates or Hessian-free alternatives. However, the potentially refuting candidates indicate that key elements, particularly the algorithmic framework and implementation strategies, may overlap meaningfully with existing techniques in the broader second-order optimization landscape.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors demonstrate that Hessian degeneracy (many zero and near-zero eigenvalues) is a fundamental but often-overlooked problem in Newton-based unlearning for neural networks. They show that common baselines, such as the pseudo-inverse and fixed damping, fail to address this issue effectively.
The authors introduce two novel unlearning algorithms, CuReNU and StoCuReNU, that use cubic regularization to automatically determine the optimal damping factor for the Newton unlearning update. Both algorithms come with convergence guarantees to ε-second-order stationary points, addressing the Hessian degeneracy problem.
The authors develop StoCuReNU as a scalable variant that uses Hessian-vector products instead of explicit Hessian storage, reducing memory usage to a constant O(2d) from the O(dn) of existing Hessian-free methods while avoiding approximation errors.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
Contribution Analysis
Detailed comparisons for each claimed contribution
Identification of Hessian degeneracy as a fundamental issue in Newton unlearning for neural networks
The authors demonstrate that Hessian degeneracy (many zero and near-zero eigenvalues) is a fundamental but often-overlooked problem in Newton-based unlearning for neural networks. They show that common baselines, such as the pseudo-inverse and fixed damping, fail to address this issue effectively.
[28] On Newton's Method to Unlearn Neural Networks
[6] Langevin unlearning: A new perspective of noisy gradient descent for machine unlearning
[10] Certified minimax unlearning with generalization rates and deletion capacity
[11] Muter: Machine unlearning on adversarially trained models
[14] Deep Unlearning via Randomized Conditionally Independent Hessians
[17] A unified gradient-based framework for task-agnostic continual learning-unlearning
[19] Second-order information matters: Revisiting machine unlearning for large language models
[24] Unified gradient-based machine unlearning with remain geometry enhancement
[56] A second-order perspective on model compositionality and incremental learning
[57] Sharpness-Aware Parameter Selection for Machine Unlearning
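The failure modes described for this contribution can be illustrated with a small numpy sketch. The Hessian and gradient values below are toy assumptions, not numbers from the paper: a spectrum with one zero and one near-zero eigenvalue makes plain Newton undefined, inflates the pseudo-inverse step along the near-zero eigendirection, and leaves the fixed-damping step governed entirely by the arbitrary damping constant.

```python
import numpy as np

# Toy Hessian with the degenerate spectrum the paper highlights:
# one large, one near-zero, and one exactly zero eigenvalue.
H = np.diag([4.0, 1e-8, 0.0])
g = np.array([1.0, 1.0, 1.0])  # gradient of the removal objective (toy values)

# Plain Newton: H is singular, so np.linalg.solve(H, -g) raises LinAlgError.

# Pseudo-inverse baseline: the near-zero eigenvalue inflates the step to ~1e8,
# while the exactly-null direction receives no update at all.
s_pinv = -np.linalg.pinv(H) @ g

# Fixed-damping baseline: along (near-)null directions the step magnitude is
# about 1/lam, dictated by the arbitrary constant rather than the geometry.
lam = 1e-6
s_damp = -np.linalg.solve(H + lam * np.eye(3), g)
```

Neither baseline adapts to the spectrum, which is the motivation the authors give for tying the damping strength to the geometry via cubic regularization.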
CuReNU and StoCuReNU unlearning algorithms based on cubic regularization
The authors introduce two novel unlearning algorithms, CuReNU and StoCuReNU, that use cubic regularization to automatically determine the optimal damping factor for the Newton unlearning update. Both algorithms come with convergence guarantees to ε-second-order stationary points, addressing the Hessian degeneracy problem.
[28] On Newton's Method to Unlearn Neural Networks
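The mechanism behind such a cubic-regularized update can be sketched with the standard textbook construction (a generic illustration under assumed notation, not the authors' implementation): the minimizer s of the cubic model g.s + 0.5*s'Hs + (M/6)*||s||^3 satisfies (H + lam*I)s = -g with lam = (M/2)*||s||, so the damping factor lam is determined automatically from the geometry, here by bisection for a small dense H.

```python
import numpy as np

def cubic_reg_step(g, H, M, iters=60):
    """Minimize the cubic model m(s) = g.s + 0.5*s'Hs + (M/6)*||s||^3.
    Optimality: (H + lam*I) s = -g with lam = (M/2)*||s||, and H + lam*I
    positive semidefinite. lam is found by bisection on the secular
    equation phi(lam) = ||s(lam)|| - 2*lam/M (dense H, small problems only)."""
    d = H.shape[0]
    lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12  # keep H + lam*I invertible
    hi = lo + 1.0
    # Grow hi until ||s(hi)|| <= 2*hi/M, bracketing the root of phi.
    while np.linalg.norm(np.linalg.solve(H + hi * np.eye(d), -g)) > 2 * hi / M:
        hi *= 2.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        s = np.linalg.solve(H + lam * np.eye(d), -g)
        if np.linalg.norm(s) > 2 * lam / M:
            lo = lam  # lam too small: damping must increase
        else:
            hi = lam
    return s, lam

# Works even when H is singular, where plain Newton fails:
s, lam = cubic_reg_step(np.array([1.0, 1.0]), np.diag([2.0, 0.0]), M=1.0)
```

The returned lam plays the role of the automatically chosen damping factor: unlike a fixed constant, it scales with the step length as the cubic term dictates.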
Scalable Hessian-free implementation with constant memory usage
The authors develop StoCuReNU as a scalable variant that uses Hessian-vector products instead of explicit Hessian storage, reducing memory usage to a constant O(2d) from the O(dn) of existing Hessian-free methods while avoiding approximation errors.
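The Hessian-vector-product idea can be sketched generically (a standard matrix-free construction, not the authors' code; the finite-difference HVP below is itself an approximation, whereas exact HVPs via automatic differentiation avoid even that error): gradients alone yield H v in O(d) memory, and conjugate gradient then solves a damped Newton system without ever materializing the d-by-d Hessian.

```python
import numpy as np

def hvp(grad_fn, theta, v, eps=1e-5):
    """Hessian-vector product by central differences of the gradient:
    H v ~= (grad(theta + eps*v) - grad(theta - eps*v)) / (2*eps).
    Only O(d) memory: the Hessian is never formed."""
    return (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2 * eps)

def cg_solve(matvec, b, iters=50, tol=1e-10):
    """Conjugate gradient for A x = b using only matrix-vector products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Damped Newton step (H + lam*I) s = -g, matrix-free, on a toy quadratic
# f = 0.5 * theta' A theta whose gradient (and hence exact Hessian) we know:
A = np.array([[2.0, 1.0], [1.0, 3.0]])
grad_fn = lambda th: A @ th
theta = np.array([0.5, -0.3])
g = grad_fn(theta)
lam = 0.5
matvec = lambda u: hvp(grad_fn, theta, u) + lam * u
s = cg_solve(matvec, -g)
```

Peak memory here is a handful of length-d vectors regardless of problem size, which is the substance of the constant-memory claim.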