Tensor learning with orthogonal, Lorentz, and symplectic symmetries

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: equivariant machine learning, tensors, orthogonal, Lorentz, symplectic
Abstract:

Tensors are a fundamental data structure in many scientific domains, including time series analysis, materials science, and physics. Improving our ability to produce and handle tensors is essential to efficiently address problems in these domains. In this paper, we show how to exploit the underlying symmetries of functions that map tensors to tensors. More concretely, we develop universally expressive equivariant machine learning architectures on tensors that exploit the fact that, in many cases, these tensor functions are equivariant with respect to the diagonal action of the orthogonal, Lorentz, and/or symplectic groups. We showcase our results on three problems from materials science, theoretical computer science, and time series analysis. For time series, we combine our method with the increasingly popular path-signature approach, which is also invariant with respect to reparameterizations. Our numerical experiments show that our equivariant models outperform corresponding non-equivariant baselines.

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholarly search engine). It analyzes a paper's tasks and contributions against retrieved prior work. While this system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. These results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 25
Refutable Papers: 3

Research Landscape Overview

Core task: Equivariant machine learning on tensors with group symmetries. The field organizes around several complementary branches. Foundational work establishes tensor representations that respect group actions, providing the mathematical backbone for architectures that predict tensor-valued properties such as elasticity or piezoelectric tensors. Equivariant architectures themselves range from early frameworks like Tensor Field Networks [17] to more recent designs that handle higher-order tensors and diverse physical quantities. Computational efficiency remains a central concern, with methods exploring reduced representations and scalable implementations to handle large systems. Beyond the standard SO(3) rotation group, researchers have extended equivariance to broader symmetries, including space groups, permutation groups, and relativistic settings. Applications span physical and dynamical systems (molecular Hamiltonians, quantum circuits, turbulence modeling), while specialized domains address emerging challenges in materials science, high-energy physics, and geometric vision tasks.

A particularly active line of work focuses on general group equivariance, where methods must accommodate symmetries beyond rotations. Within this branch, Lorentz and relativistic symmetries represent a specialized but growing area. Tensor Learning Symmetries [0] sits squarely in this niche, addressing the challenge of learning with Lorentz-invariant structures. This contrasts with the bulk of the field, which emphasizes SO(3) equivariance for molecular and materials applications, as seen in works predicting elasticity tensors or piezoelectric properties. Lorentz Group Equivariant [33] explores similar relativistic themes, highlighting the trade-offs between expressiveness and computational tractability when moving to higher-dimensional spacetime symmetries.
The main open question in this corner of the taxonomy is how to balance the mathematical rigor of relativistic equivariance with the practical demands of scalable learning, especially as applications in high-energy physics and cosmology become more data-intensive.

Claimed Contributions

Universally expressive equivariant architectures for tensor functions

The authors introduce machine learning architectures that can represent tensor-to-tensor functions while respecting equivariance under classical Lie groups (orthogonal, Lorentz, and symplectic groups). These architectures are universally expressive in the sense that they can approximate arbitrary equivariant functions.
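To make the equivariance property concrete, here is a minimal sketch (not the paper's actual architecture) of the simplest kind of O(d)-equivariant vector-to-vector map: scaling the input by a function of the invariant inner product. The choice g(t) = sin(t) is arbitrary and purely illustrative.

```python
import numpy as np

# Illustrative only: maps of the form f(x) = g(<x, x>) x are O(d)-equivariant
# because <Qx, Qx> = <x, x> for any orthogonal Q, so f(Qx) = Q f(x).
def f(x):
    return np.sin(x @ x) * x  # g(t) = sin(t), chosen arbitrarily

rng = np.random.default_rng(0)
d = 5
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix
x = rng.standard_normal(d)

# Numerical equivariance check: f(Q x) == Q f(x)
lhs = f(Q @ x)
rhs = Q @ f(x)
print(np.allclose(lhs, rhs))  # True
```

The paper's claim of universal expressivity concerns far richer function classes on higher-order tensors; this check only demonstrates what "equivariant with respect to the diagonal group action" means operationally.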

8 retrieved papers
General mathematical framework for equivariant tensor learning

The paper provides the first comprehensive mathematical framework for designing equivariant machine learning models on tensors. This framework gives explicit parameterizations for polynomial and analytic functions mapping tensor inputs to tensor outputs that are equivariant with respect to classical Lie groups.
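As a hedged illustration of the Lorentz case (again, not the paper's construction): the preserved bilinear form is the Minkowski metric eta, and any map f(x) = g(x^T eta x) x is equivariant under every Lambda satisfying Lambda^T eta Lambda = eta. The sketch below verifies this for a boost along the x-axis.

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def f(x):
    # g(t) = exp(-t) is an arbitrary analytic choice; any function of the
    # invariant x^T eta x yields a Lorentz-equivariant map x -> g(...) x.
    return np.exp(-(x @ eta @ x)) * x

# Standard Lorentz boost along the x-axis with rapidity phi.
phi = 0.7
ch, sh = np.cosh(phi), np.sinh(phi)
Lam = np.array([[ch, sh, 0.0, 0.0],
                [sh, ch, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

x = np.array([1.0, 0.2, -0.3, 0.5])
print(np.allclose(Lam.T @ eta @ Lam, eta))  # Lambda preserves eta: True
print(np.allclose(f(Lam @ x), Lam @ f(x)))  # equivariance: True
```

The same pattern carries over to the symplectic group by replacing eta with a skew-symmetric form omega; the framework's claimed generality is precisely that these cases admit a uniform treatment.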

9 retrieved papers
Explicit parameterizations using tensor invariant theory

The authors derive explicit mathematical parameterizations for equivariant functions by leveraging tensor invariant theory. These parameterizations cover polynomial functions and analytic functions, providing practical recipes for implementing equivariant models across multiple classical Lie groups.
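The invariant-theoretic recipe can be sketched for the simplest setting, two vectors under O(d). By Weyl's first fundamental theorem, polynomial invariants of vectors are generated by their pairwise inner products, so an equivariant vector-valued map can be written as a combination alpha*u + beta*v with invariant coefficients. The function name and coefficient choices below are hypothetical, purely to show the pattern; the paper's parameterizations cover general tensor orders and all three group families.

```python
import numpy as np

def equivariant_map(u, v):
    # Generating invariants of (u, v) under O(d): <u,u>, <u,v>, <v,v>.
    inv = np.array([u @ u, u @ v, v @ v])
    # Coefficients may be arbitrary (analytic) functions of the invariants.
    alpha = np.tanh(inv[0] - inv[2])
    beta = inv[1] ** 2
    return alpha * u + beta * v

rng = np.random.default_rng(1)
d = 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix
u, v = rng.standard_normal(d), rng.standard_normal(d)

# Equivariance under the diagonal action: F(Qu, Qv) == Q F(u, v)
print(np.allclose(equivariant_map(Q @ u, Q @ v),
                  Q @ equivariant_map(u, v)))  # True
```

The design choice mirrors the contribution's claim: invariants parameterize the scalar degrees of freedom, while a fixed basis of equivariant "directions" (here, u and v themselves) carries the group action.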

8 retrieved papers
Can Refute

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Universally expressive equivariant architectures for tensor functions

The authors introduce machine learning architectures that can represent tensor-to-tensor functions while respecting equivariance under classical Lie groups (orthogonal, Lorentz, and symplectic groups). These architectures are universally expressive in the sense that they can approximate arbitrary equivariant functions.

Contribution

General mathematical framework for equivariant tensor learning

The paper provides the first comprehensive mathematical framework for designing equivariant machine learning models on tensors. This framework gives explicit parameterizations for polynomial and analytic functions mapping tensor inputs to tensor outputs that are equivariant with respect to classical Lie groups.

Contribution

Explicit parameterizations using tensor invariant theory

The authors derive explicit mathematical parameterizations for equivariant functions by leveraging tensor invariant theory. These parameterizations cover polynomial functions and analytic functions, providing practical recipes for implementing equivariant models across multiple classical Lie groups.