Beyond Spectra: Eigenvector Overlaps in Loss Geometry

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: hessian, overlap, eigenvector, geometry, ridge regression, noise, free probability, algorithms, CIFAR, high dimensional statistics, generalization, covariate shift, double descent, multiple descent, random matrix theory
Abstract:

Local loss geometry in machine learning is fundamentally a two-operator concept. When only a single loss is considered, geometry is fully summarized by the Hessian spectrum; in practice, however, both training and test losses are relevant, and the resulting geometry depends on their spectra together with the alignment of their eigenspaces. We first establish general foundations for two-loss geometry by formulating a universal local fluctuation law, showing that the expected test-loss increment under small training perturbations is a trace that combines train and test spectral data with a critical additional factor quantifying eigenspace overlap, and by proving a novel transfer law that describes how overlaps transform in response to noise. We next apply these laws to ridge regression with arbitrary covariate shift, a solvable analytical model in which operator-valued free probability yields asymptotically exact overlap decompositions that reveal overlaps as the natural quantities specifying shift and that resolve the puzzle of multiple descent: peaks are controlled by eigenspace (mis-)alignment rather than by Hessian ill-conditioning alone. Finally, for empirical validation and scalability, we confirm the fluctuation law in multilayer perceptrons, develop novel algorithms based on subspace iteration and kernel polynomial methods to estimate overlap functionals, and apply them to a ResNet-20 trained on CIFAR10, showing that class imbalance reshapes train–test loss geometry via induced misalignment. Together, these results establish overlaps as the critical missing ingredient for understanding local loss geometry, providing both theoretical foundations and scalable estimators for analyzing generalization in modern neural networks.
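To fix intuition for the "trace that combines train and test spectral data" mentioned in the abstract, the following is a minimal sketch of the general shape such a fluctuation law can take for a locally quadratic test loss. The notation (eigenpairs (sigma_i, u_i) of the train Hessian, (lambda_j, v_j) of the test Hessian, perturbation covariance Sigma) is assumed for illustration and is not necessarily the paper's; the exact statement of Theorem 1 may differ.

```latex
% Illustrative sketch only; the symbols are assumed, not the paper's.
% Perturb parameters by a small random \delta with covariance \Sigma
% aligned to the train-Hessian eigenbasis. To second order in \delta,
\mathbb{E}\big[\Delta L_{\mathrm{test}}\big]
  \;\approx\; \tfrac{1}{2}\operatorname{tr}\!\big(\Sigma\, H_{\mathrm{test}}\big)
  \;=\; \tfrac{1}{2}\sum_{i,j}\sigma_i\,\lambda_j\,
        \big|\langle u_i, v_j\rangle\big|^{2},
\qquad
\Sigma \;=\; \sum_i \sigma_i\, u_i u_i^{\top}.
% The overlap factor |<u_i, v_j>|^2 is exactly what spectra alone miss:
% the same two spectra give different increments at different alignments.
```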

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. These results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper develops a two-loss framework for analyzing local loss geometry through spectral properties and eigenspace overlaps between training and test Hessians. It resides in the 'Universal Laws for Train-Test Loss Interaction' leaf, of which it is currently the sole member. This positioning reflects a sparse research direction within the broader theoretical-foundations branch, suggesting the paper addresses a relatively unexplored formalization of multi-operator loss geometry compared to adjacent areas such as nonlinear feature map theory or asymptotic learning under distributional mismatch.

The taxonomy reveals neighboring theoretical work in spiked covariance models and asymptotic learning theory, both examining spectral structure but through different lenses: nonlinear feature propagation and distributional assumptions, respectively. The paper's emphasis on universal fluctuation and transfer laws distinguishes it from these sibling branches by focusing on general operator-algebraic relationships rather than model-specific derivations. Parallel branches on optimization and empirical analysis explore eigenspace control and landscape visualization, providing complementary perspectives that manipulate or measure what this work characterizes theoretically.

Among the eighteen candidates examined across three contributions, none were identified as clearly refuting the proposed framework: ten candidates (zero refutations) for the two-loss geometry formulation, one candidate (zero refutations) for the universal laws, and seven candidates (zero refutations) for the scalable algorithms. Within this limited search scope, the specific combination of spectral data with eigenspace overlap quantification, formalized through universal laws, appears distinct in the examined literature, though the small candidate pool (particularly the single candidate for the core theoretical laws) limits confidence in comprehensiveness.

Based on the top eighteen semantic matches, the work appears to occupy a novel theoretical niche, formalizing train-test eigenspace interactions through universal laws. The sparse taxonomy leaf and the absence of refuting candidates suggest originality, though the limited search scale (especially the single candidate examined for the central fluctuation and transfer laws) means potentially relevant prior work in operator theory or random matrix methods may exist beyond this scope.

Taxonomy

Core-task Taxonomy Papers: 7
Claimed Contributions: 3
Contribution Candidate Papers Compared: 18
Refutable Papers: 0

Research Landscape Overview

Core task: eigenvector overlaps in train-test loss geometry. This field investigates how the geometric structure of training and test loss surfaces, particularly their Hessian eigenspaces, interacts to govern generalization. The taxonomy organizes work into four main branches: theoretical foundations that derive universal laws for multi-operator loss interactions, optimization and regularization methods that explicitly control eigenspace properties, empirical studies that measure and visualize loss landscape geometry across architectures, and domain-specific applications that leverage eigenspace analysis for tasks such as metric learning or continual learning.

Theoretical Foundations of Multi-Operator Loss Geometry, where Eigenvector Overlaps[0] resides, focuses on deriving rigorous relationships between train and test Hessians, often using random matrix theory or asymptotic analysis. Works like Spiked Covariance Propagation[1] and Learning Asymptotics[7] exemplify this branch by characterizing how spectral structure propagates through layers or emerges in high-dimensional limits. Meanwhile, branches on optimization and empirical analysis explore how eigenvalue regularization (e.g., Eigenvalue Regularization SAM[2]) or curvature measurements (e.g., Loss Landscape Curvature[6]) can be used to flatten loss surfaces and improve generalization.

A central theme across these branches is the trade-off between sharpness in the training loss and alignment of train-test eigenspaces. Some studies emphasize large flat regions (Large Geometric Vicinity[3]) as proxies for generalization, while others probe finer spectral overlaps to predict test performance. Eigenvector Overlaps[0] sits squarely within the theoretical foundations branch, focusing on universal laws that quantify how eigenvector alignment between train and test Hessians influences generalization gaps. Its emphasis on rigorous multi-operator interaction contrasts with more empirical works like Loss Landscapes Generalization[4], which catalog landscape features across datasets, and with application-driven studies such as Rethinking Metric Learning[5], which apply eigenspace insights to embedding spaces. By deriving universal overlap laws, Eigenvector Overlaps[0] provides a principled lens for interpreting the geometric underpinnings of generalization, complementing both optimization-focused and measurement-focused lines of inquiry.

Claimed Contributions

Two-loss framework for local loss geometry incorporating spectra and overlaps

The authors propose a framework that characterizes local loss geometry using both training and test losses, showing that geometry depends not only on Hessian spectra but critically on eigenvector overlaps between train and test Hessians. This corrects the common practice of treating spectra alone as sufficient for understanding loss geometry (the overlap notion is sketched below).

10 retrieved papers
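For concreteness, here is the standard random-matrix definition of the eigenvector overlap matrix that this contribution refers to; the symbols are our notation, not necessarily the paper's.

```latex
% Standard definition (our notation). With eigendecompositions
% H_train u_i = sigma_i u_i and H_test v_j = lambda_j v_j,
O_{ij} \;=\; \big|\langle u_i, v_j\rangle\big|^{2},
\qquad
\sum_{i} O_{ij} \;=\; \sum_{j} O_{ij} \;=\; 1 .
% O is doubly stochastic: O = I when the two eigenbases coincide, while
% O_{ij} \approx 1/d for all i, j signals complete misalignment in
% dimension d.
```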
Universal local fluctuation law and overlap transfer law

The authors establish two fundamental theoretical results: a fluctuation law (Theorem 1) expressing the expected test-loss increment as a trace combining train/test spectra with eigenvector overlaps, and a transfer law (Theorem 2) describing how overlaps transform under noise, using free probability theory (a numerical plausibility check of the fluctuation law's shape follows below).

1 retrieved paper
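As a quick plausibility check of the fluctuation law's general shape (a sketch under our own assumptions, not a reproduction of the paper's Theorem 1), the NumPy snippet below perturbs a toy quadratic test loss with noise whose covariance is aligned to the train-Hessian eigenbasis, then compares the Monte-Carlo mean increment to the trace formula from the sketch after the abstract. All names and the specific covariance choice are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 30

def random_psd(dim):
    """Random positive-definite matrix standing in for a Hessian."""
    A = rng.standard_normal((dim, dim))
    return A @ A.T / dim

H_train, H_test = random_psd(d), random_psd(d)

# Eigenpairs: (sigma_i, u_i) for the train Hessian, (lam_j, v_j) for test.
sigma, U = np.linalg.eigh(H_train)
lam, V = np.linalg.eigh(H_test)

# Perturbation covariance aligned to the train eigenbasis (here we simply
# reuse sigma_i as the per-direction variances, an arbitrary choice).
Sigma = U @ np.diag(sigma) @ U.T

# Trace formula: 1/2 * sum_ij sigma_i * lam_j * <u_i, v_j>^2.
overlap = (U.T @ V) ** 2                 # O_ij = <u_i, v_j>^2
predicted = 0.5 * sigma @ overlap @ lam

# Monte-Carlo estimate of E[L_test(delta) - L_test(0)] for the quadratic
# test loss L_test(theta) = 1/2 * theta^T H_test theta, delta ~ N(0, Sigma).
Lchol = np.linalg.cholesky(Sigma + 1e-12 * np.eye(d))
deltas = rng.standard_normal((100_000, d)) @ Lchol.T
empirical = 0.5 * np.mean(np.einsum("nd,de,ne->n", deltas, H_test, deltas))

print(f"trace formula: {predicted:.4f}   monte carlo: {empirical:.4f}")
```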
Scalable algorithms for estimating Hessian eigenvector overlaps

The authors introduce computational methods combining subspace iteration for outlier eigenspaces with a generalized kernel polynomial method for bulk eigenspaces, enabling efficient estimation of overlap functions between pairs of Hessians in networks with millions of parameters, without forming the matrices explicitly (a matrix-free sketch of the subspace-iteration half follows below).

7 retrieved papers
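The following is a minimal matrix-free sketch of the outlier-eigenspace half of such an estimator, under our own assumptions: subspace iteration against a Hessian-vector-product oracle (hvp is a placeholder for, e.g., an autodiff HVP closure), followed by the overlap matrix between the two recovered top eigenspaces. It is not the paper's algorithm, and the kernel-polynomial bulk estimator is sketched separately at the end of this report.

```python
import numpy as np

def top_eigenspace(hvp, dim, k, iters=200, seed=0):
    """Matrix-free subspace iteration: approximate the top-k eigenpairs of
    a symmetric operator available only through matvecs (e.g. autodiff
    Hessian-vector products)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    for _ in range(iters):
        Z = np.column_stack([hvp(q) for q in Q.T])  # one operator sweep
        Q, _ = np.linalg.qr(Z)                      # re-orthonormalize
    # Rayleigh-Ritz step on the small k x k projected problem.
    T = Q.T @ np.column_stack([hvp(q) for q in Q.T])
    evals, W = np.linalg.eigh((T + T.T) / 2)
    return evals[::-1], Q @ W[:, ::-1]              # descending order

def overlap_matrix(U, V):
    """O_ij = <u_i, v_j>^2 between two orthonormal eigenvector blocks."""
    return (U.T @ V) ** 2

# Toy demo: a planted 5-dimensional outlier eigenspace plus a small
# symmetric perturbation. With a real network, the lambdas below would be
# train/test HVP closures and no d x d matrix would ever be formed.
rng = np.random.default_rng(1)
d, k = 200, 5
Qs, _ = np.linalg.qr(rng.standard_normal((d, k)))
H_a = Qs @ np.diag([10.0, 8.0, 6.0, 4.5, 3.5]) @ Qs.T
N = rng.standard_normal((d, d))
H_b = H_a + 0.1 * (N + N.T) / np.sqrt(d)

_, U = top_eigenspace(lambda v: H_a @ v, d, k)
_, V = top_eigenspace(lambda v: H_b @ v, d, k, seed=1)
print(np.round(overlap_matrix(U, V), 2))  # near-diagonal: aligned outliers
```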

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current TopK core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape, it appears structurally isolated, which is a partial signal of novelty, though one constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution: Two-loss framework for local loss geometry incorporating spectra and overlaps (10 candidates examined, none refuting).

Contribution: Universal local fluctuation law and overlap transfer law (1 candidate examined, none refuting).

Contribution: Scalable algorithms for estimating Hessian eigenvector overlaps (7 candidates examined, none refuting).
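The bulk-eigenspace half of the third contribution relies on a generalized kernel polynomial method. The sketch below is not the paper's algorithm; it is a plain KPM-style estimator, under our own assumptions, of a generic overlap functional tr(f(H_A) g(H_B)) via Hutchinson trace probes plus Chebyshev expansions, so that only Hessian-vector products are needed. The spectral bounds lo/hi, the functions f/g, and all names are illustrative; in practice the bounds would be estimated, e.g., with a few Lanczos steps.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_apply(hvp, v, coeffs, lo, hi):
    """Apply p(H) v, where p = sum_m coeffs[m] T_m and the spectrum of H
    is mapped affinely from [lo, hi] onto [-1, 1]."""
    a, b = 2.0 / (hi - lo), -(hi + lo) / (hi - lo)
    Hs = lambda x: a * hvp(x) + b * x                  # rescaled matvec
    t_prev, t_cur = v, Hs(v)
    out = coeffs[0] * t_prev + coeffs[1] * t_cur
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2 * Hs(t_cur) - t_prev  # Chebyshev recurrence
        out = out + c * t_cur
    return out

def overlap_functional(hvp_a, hvp_b, f, g, dim, lo, hi,
                       deg=50, probes=100, seed=0):
    """Hutchinson estimate of tr(f(H_A) g(H_B)) with Chebyshev-expanded
    f and g; only matvecs with H_A and H_B are required."""
    rng = np.random.default_rng(seed)
    rescale = lambda fun: C.chebinterpolate(
        lambda x: fun(0.5 * (hi - lo) * x + 0.5 * (hi + lo)), deg)
    cf, cg = rescale(f), rescale(g)
    total = 0.0
    for _ in range(probes):
        z = rng.choice([-1.0, 1.0], size=dim)          # Rademacher probe
        total += z @ cheb_apply(hvp_a,
                                cheb_apply(hvp_b, z, cg, lo, hi),
                                cf, lo, hi)
    return total / probes

# Sanity check against a dense evaluation on small Wigner-like matrices.
rng = np.random.default_rng(2)
d = 300
A = rng.standard_normal((d, d)); H_a = (A + A.T) / (2 * np.sqrt(d))
B = rng.standard_normal((d, d)); H_b = (B + B.T) / (2 * np.sqrt(d))
f = g = lambda x: x ** 2
est = overlap_functional(lambda v: H_a @ v, lambda v: H_b @ v,
                         f, g, d, lo=-2.5, hi=2.5)
ea, Ua = np.linalg.eigh(H_a); eb, Ub = np.linalg.eigh(H_b)
exact = np.sum(np.outer(f(ea), g(eb)) * (Ua.T @ Ub) ** 2)
print(f"stochastic estimate {est:.2f} vs dense evaluation {exact:.2f}")
```

Since tr(f(H_A) g(H_B)) equals the sum over i, j of f(sigma_i) g(lambda_j) O_ij, sweeping localized choices of f and g (e.g., narrow bumps centered at chosen eigenvalue pairs) resolves a weighted overlap function across the two spectra, which is the kind of quantity the contribution's bulk estimator targets.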
