Gauge-invariant representation holonomy

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: Representation learning, Gauge invariance, Holonomy, Geometric deep learning, Robustness
Abstract:

Deep networks learn internal representations whose geometry—how features bend, rotate, and evolve—affects both generalization and robustness. Existing similarity measures such as CKA or SVCCA capture pointwise overlap between activation sets, but miss how representations change along input paths. Two models may appear nearly identical under these metrics yet respond very differently to perturbations or adversarial stress. We introduce representation holonomy, a gauge-invariant statistic that measures this path dependence. Conceptually, holonomy quantifies the “twist” accumulated when features are parallel-transported around a small loop in input space: flat representations yield zero holonomy, while nonzero values reveal hidden curvature. Our estimator fixes gauge through global whitening, aligns neighborhoods using shared subspaces and rotation-only Procrustes, and embeds the result back to the full feature space. We prove invariance to orthogonal (and affine, post-whitening) transformations, establish a linear null for affine layers, and show that holonomy vanishes at small radii. Empirically, holonomy scales with loop radius and depth, separates models that appear similar under CKA, and correlates with adversarial and corruption robustness across training regimes. It also tracks training dynamics as features form and stabilize. Together, these results position representation holonomy as a practical and scalable diagnostic for probing the geometric structure of learned representations beyond pointwise similarity.

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholar search engine). It analyzes a paper's tasks and contributions against retrieved prior work. The system identifies potential overlaps and novel directions, but its coverage is not exhaustive and its judgments are approximate; the results are intended to assist human reviewers and should not be treated as a definitive verdict on novelty.
Note that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs), and the system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces representation holonomy, a gauge-invariant statistic measuring path-dependent curvature in neural network feature spaces. It occupies the 'Gauge-Invariant Path-Dependent Curvature Measures' leaf, which contains only this single paper within a taxonomy of 23 works. This isolation suggests the paper pioneers a distinct methodological direction within the broader field of geometric representation analysis, rather than extending a crowded research thread.

The taxonomy reveals neighboring approaches in sibling leaves: 'Manifold Geometry and Topology Preservation' (3 papers) focuses on preserving intrinsic structure during embedding, 'Spectral and Rank-Based Geometry Characterization' (1 paper) uses eigenspectrum properties, and 'Multi-View and Cross-Modal Diffusion Geometry' (1 paper) constructs geometries across data views. The paper diverges by quantifying curvature through parallel transport around loops rather than static manifold properties or spectral signatures, addressing a gap between pointwise similarity metrics and dynamic geometric behavior under perturbations.

Among 28 candidates examined across three contributions, zero refutable pairs emerged. The core holonomy statistic examined 10 candidates with no prior work providing overlapping methodology; the estimator with theoretical guarantees examined 8 candidates similarly; empirical validation examined 10 candidates. This limited search scope—top-K semantic matches plus citations—suggests the specific combination of gauge invariance, parallel transport, and loop-based curvature measurement has not been directly addressed in the retrieved literature, though the analysis cannot claim exhaustive coverage of all geometric representation work.

Given the constrained search and the paper's unique position as the sole occupant of its taxonomy leaf, the holonomy framework appears methodologically distinct within the examined scope. However, the analysis covers approximately 28 papers from semantic neighborhoods, not the entire geometric deep learning literature. The novelty assessment reflects what was retrieved, acknowledging that broader or differently-targeted searches might surface related curvature-based diagnostics not captured here.

Taxonomy

- Core-task Taxonomy Papers: 23
- Claimed Contributions: 3
- Contribution Candidate Papers Compared: 28
- Refutable Papers: 0

Research Landscape Overview

Core task: measuring path-dependent geometry of learned neural network representations. The field organizes around four main branches that collectively address how neural networks encode and transform information through geometric lenses.

The Geometric Structure and Curvature of Representation Spaces branch focuses on intrinsic measures of how representations bend and curve, including gauge-invariant approaches like Gauge Invariant Holonomy[0] and manifold-preserving methods such as Manifold Preserving Latent[4]. Optimization and Training Dynamics examines how learning trajectories shape representation geometry over time, with works like Temporal Training Influence[1] and Neural Hamiltonian Orbits[2] tracking evolution through parameter space. Domain-Specific Path-Dependent Modeling applies geometric insights to specialized contexts ranging from materials science (Granular Material Learning[10], Microstructure Graph Network[12]) to biological systems (Single Cell Differentiation[8]) and spatial data (Trajectory Map Matching[17]). The Theoretical Foundations and Interpretability branch develops formal frameworks for understanding why and how geometric properties emerge, connecting to broader questions of model behavior and generalization.

Recent activity reveals a tension between domain-agnostic geometric measures and application-driven approaches. A small cluster of works pursues universal curvature metrics that remain invariant under network reparameterizations, while many studies embed domain knowledge directly into path-dependent architectures for materials modeling or biological trajectories.

Gauge Invariant Holonomy[0] sits squarely within the gauge-invariant curvature measures, emphasizing mathematical rigor in quantifying how representations twist along paths in input space without dependence on arbitrary coordinate choices. This contrasts with nearby efforts like Multiview Diffusion Geometry[5], which leverages multiple observational perspectives to infer latent geometry, or Representation Geometry Tracing[15], which tracks geometric properties dynamically during training. The open question remains whether such pure geometric diagnostics can inform practical interventions in training, or whether they primarily serve as post-hoc interpretability tools for understanding what networks have already learned.

Claimed Contributions

Representation holonomy as a gauge-invariant statistic

The authors propose representation holonomy, a new gauge-invariant measure that quantifies path-dependent changes in learned representations by measuring the accumulated twist when features are parallel-transported around closed loops in input space, revealing hidden curvature beyond pointwise similarity metrics.

10 retrieved papers
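The loop-transport idea behind this contribution can be sketched numerically. The snippet below is a minimal illustration, not the authors' estimator: feature matrices sampled at successive points on a closed loop are aligned with rotation-only Procrustes, the rotations are composed around the loop, and holonomy is read off as the deviation of the composition from the identity. Function names and the NumPy setup are my own assumptions.

```python
import numpy as np

def procrustes_rotation(A, B):
    """Proper rotation R (det = +1) minimizing ||A @ R - B||_F."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    if np.linalg.det(U @ Vt) < 0:   # flip one axis to enforce a proper rotation
        U[:, -1] *= -1
    return U @ Vt

def loop_holonomy(feats):
    """feats: list of (n, d) feature matrices at successive loop points,
    with feats[-1] evaluated at the same inputs as feats[0].
    Composes the step-wise transports and returns the Frobenius
    deviation of the composition from the identity (zero for a
    'flat' representation)."""
    d = feats[0].shape[1]
    T = np.eye(d)
    for A, B in zip(feats[:-1], feats[1:]):
        T = T @ procrustes_rotation(A, B)
    return np.linalg.norm(T - np.eye(d))
```

In a purely linear example (features rotated by a fixed orthogonal matrix along the loop), the transports compose back to the identity, matching the linear null the paper establishes for affine layers.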
Practical estimator with theoretical guarantees

The authors develop a computationally practical estimator that fixes gauge through global whitening, aligns neighborhoods using shared subspaces and rotation-only Procrustes, and embeds results back to full feature space. They prove invariance to orthogonal and affine transformations, establish a linear null for affine layers, and show holonomy vanishes at small radii.

8 retrieved papers
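The estimator pipeline described here (whiten to fix gauge, align neighborhoods in a shared subspace with rotation-only Procrustes, embed back to the full feature space) can be sketched as follows. This is a hedged reconstruction from the description, not the authors' code: the names `whiten`, `shared_subspace`, and `aligned_rotation` are hypothetical, and details such as the subspace rank `k` and the ZCA form of whitening are my assumptions.

```python
import numpy as np

def whiten(X, eps=1e-8):
    """ZCA whitening: zero mean, approximately identity covariance.
    This fixes the gauge up to a global rotation."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W

def shared_subspace(A, B, k):
    """Orthonormal (d, k) basis of the top-k principal subspace
    of the stacked neighborhoods A and B."""
    _, _, Vt = np.linalg.svd(np.vstack([A, B]), full_matrices=False)
    return Vt[:k].T

def aligned_rotation(A, B, k):
    """Rotation-only Procrustes alignment of A to B inside their shared
    k-dim subspace, embedded back into the full d-dim feature space."""
    P = shared_subspace(A, B, k)                   # (d, k) basis
    U, _, Vt = np.linalg.svd((A @ P).T @ (B @ P))  # (k, k) cross-covariance
    if np.linalg.det(U @ Vt) < 0:                  # enforce det = +1
        U[:, -1] *= -1
    R_sub = U @ Vt
    d = A.shape[1]
    # act as R_sub on the subspace and as the identity on its complement
    return np.eye(d) + P @ (R_sub - np.eye(k)) @ P.T
```

Because the returned matrix acts as the identity off the shared subspace, it is exactly orthogonal in the full d-dimensional space, so the alignments can be composed across neighborhoods around a loop.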
Empirical validation on vision tasks

The authors demonstrate empirically that holonomy increases with loop radius and depth, separates models appearing similar under CKA, tracks training dynamics, and correlates with adversarial and corruption robustness across multiple training regimes including ERM, label smoothing, mixup, and adversarial training.

10 retrieved papers
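For contrast with the pointwise baseline, linear CKA (following Kornblith et al.'s standard definition) can be written in a few lines: it compares Gram structure at a fixed set of inputs and is invariant to orthogonal transformations, but carries no information about how features rotate along a path, which is the gap holonomy targets.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2),
    computed on column-centered features."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

CKA of a representation with any orthogonal rotation of itself is exactly 1, which is why two models can look identical under CKA while accumulating very different holonomy around the same loops.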

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current top-K core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, a partial signal of novelty that remains constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1: Representation holonomy as a gauge-invariant statistic

As described under Claimed Contributions above. None of the 10 retrieved candidate papers provided overlapping methodology, so no refutable pair was identified.

Contribution 2: Practical estimator with theoretical guarantees

As described under Claimed Contributions above. None of the 8 retrieved candidate papers provided overlapping methodology, so no refutable pair was identified.

Contribution 3: Empirical validation on vision tasks

As described under Claimed Contributions above. None of the 10 retrieved candidate papers provided overlapping methodology, so no refutable pair was identified.