Deep Learning for Subspace Regression

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: Grassmannian regression, subspace regression, supervised learning, ROM, POD, optimal control, balanced truncation, parametric PDEs, eigenproblems, deflated conjugate gradient, coarse grid correction
Abstract:

It is often possible to perform reduced order modelling by specifying a linear subspace that accurately captures the dynamics of the system. This approach becomes especially appealing when the linear subspace explicitly depends on the parameters of the problem. A practical way to apply such a scheme is to compute subspaces for a selected set of parameters in a computationally demanding offline stage, and in the online stage to approximate the subspace for unseen parameters by interpolation. For realistic problems the parameter space is high-dimensional, which renders classical interpolation strategies infeasible or unreliable. We propose to relax the interpolation problem to regression, introduce several loss functions suitable for subspace data, and use a neural network as an approximation to the high-dimensional target function. To further simplify the learning problem we introduce redundancy: in place of predicting a subspace of a given dimension, we predict a larger subspace. We show theoretically that this strategy decreases the complexity of the mapping for elliptic eigenproblems with constant coefficients and makes the mapping smoother for general smooth functions on the Grassmann manifold. Empirical results likewise show that accuracy improves significantly when larger-than-needed subspaces are predicted. With a set of numerical illustrations we demonstrate that subspace regression is useful for a range of tasks, including parametric eigenproblems, deflation techniques, relaxation methods, optimal control, and the solution of parametric partial differential equations.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 1

Research Landscape Overview

Core task: regression of parametric linear subspaces for reduced order modeling. This field addresses the challenge of efficiently approximating high-dimensional parametric systems by learning low-dimensional subspaces that vary smoothly with parameters. The taxonomy reveals five main branches. Subspace Construction and Basis Representation Methods focus on building effective low-dimensional bases, often through proper orthogonal decomposition or Krylov techniques (Krylov Subspace Techniques[21]). Subspace Interpolation and Regression Techniques develop strategies to predict or interpolate these bases across parameter domains, including classical manifold interpolation (Online Interpolating Models[13]) and modern machine learning approaches (Gaussian Process Subspace[17], Gradient-Free Active Subspace[27]). Parametric Model Order Reduction Frameworks provide overarching methodologies such as moment matching (Moment Matching Reduction[43]) and operator inference (Operator Inference Reduction[40]). Application-Specific Parametric ROM tailors these ideas to domains like fluid dynamics (Fluid Dynamics Challenges[12]), electromagnetics (Physics-aware Electromagnetic Reduction[3]), and structural mechanics. Specialized Techniques and Extensions explore advanced topics including tensor decompositions (Tensorial Parametric Reduction[1]) and neural network-based encoders (Convolutional Autoencoders Reduction[31]).

A particularly active line of work involves machine learning-based subspace regression, where researchers leverage Gaussian processes (Gaussian Subspace Regression[36], Electromagnetic Gaussian Regression[50]) or deep learning to predict parameter-dependent bases without exhaustive precomputation. Deep Subspace Regression[0] sits squarely within this branch, emphasizing neural architectures to regress subspaces directly from parameter inputs.
This contrasts with neighboring efforts like Gradient-Free Active Subspace[27], which identifies influential parameter directions without gradient information, and Radiation Transport Reduction[47], which applies subspace methods to challenging transport problems. The main trade-offs center on expressiveness versus computational cost: deep models can capture complex parameter dependencies but require substantial training data, while classical interpolation schemes (Parametric Matrix Interpolation[37]) offer theoretical guarantees at the expense of scalability. Open questions include how to verify subspace quality (Subspace Verification[4]) and how to balance offline training expense against online speedup across diverse applications.

Claimed Contributions

Mathematical formulation of subspace regression problem

The authors formulate subspace regression as a statistical learning problem where a parametric model maps parameters to linear subspaces on the Grassmann manifold. They demonstrate applications across eigenproblems, reduced order modeling, iterative methods, and optimal control.

10 retrieved papers
Can Refute
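As a concrete illustration of this formulation (a hypothetical sketch, not the authors' code): a parametric model maps a parameter vector to an n x k matrix, and orthonormalizing its columns yields a representative of a point on the Grassmann manifold Gr(k, n). The weight names and network shape below are assumptions for illustration.

```python
import numpy as np

def subspace_model(theta, W1, W2, n, k):
    """Hypothetical parametric model: parameter vector -> k-dim subspace of R^n.

    A small network produces an n x k matrix; its (reduced) QR factorization
    gives an orthonormal basis U, i.e. a representative of a point on the
    Grassmann manifold Gr(k, n).
    """
    h = np.tanh(W1 @ theta)       # hidden features
    A = (W2 @ h).reshape(n, k)    # raw n x k network output
    U, _ = np.linalg.qr(A)        # orthonormalize the columns
    return U

rng = np.random.default_rng(0)
n, k, p, hdim = 8, 2, 3, 16
W1 = rng.standard_normal((hdim, p))
W2 = rng.standard_normal((n * k, hdim))
U = subspace_model(rng.standard_normal(p), W1, W2, n, k)
# U has orthonormal columns: U.T @ U is (numerically) the identity
```

Any invertible recombination of U's columns represents the same subspace, which is exactly why the losses discussed below must be invariant to the choice of basis.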
Loss functions for subspace data suitable for neural network training

The authors introduce multiple loss functions that satisfy invariance requirements for Grassmannian data, including a stochastic variant based on least squares that scales better computationally as subspace dimensions increase.

10 retrieved papers
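A minimal sketch of two basis-invariant loss choices (standard constructions, assumed here for illustration; not necessarily the paper's exact losses): the squared projection Frobenius distance ||UU^T - VV^T||_F^2, and a stochastic least-squares surrogate that avoids forming n x n projectors by sampling vectors in one subspace and penalizing their residual after projection onto the other.

```python
import numpy as np

def projection_loss(U, V):
    """Squared Frobenius distance between projectors.
    Basis-invariant: depends only on span(U) and span(V)."""
    return np.linalg.norm(U @ U.T - V @ V.T, "fro") ** 2

def stochastic_ls_loss(U, V, rng, n_samples=32):
    """Stochastic surrogate: sample points in span(U) and penalize the
    residual after projecting onto span(V) (V assumed orthonormal).
    Never forms an n x n projector, so it scales better with n and k."""
    C = rng.standard_normal((U.shape[1], n_samples))
    X = U @ C                  # random samples in span(U)
    R = X - V @ (V.T @ X)      # least-squares residual w.r.t. span(V)
    return np.mean(np.sum(R ** 2, axis=0))

rng = np.random.default_rng(1)
n, k = 50, 3
U, _ = np.linalg.qr(rng.standard_normal((n, k)))
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))  # random basis rotation
# Both losses vanish for the same subspace in a rotated basis,
# and are strictly positive for a genuinely different subspace.
```

The invariance follows from (UQ)(UQ)^T = UU^T for orthogonal Q, so gradient-based training cannot be misled by an arbitrary choice of basis in the targets.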
Subspace embedding technique with theoretical justification

The authors propose predicting larger-than-needed subspaces to simplify the learning problem. They provide theoretical justification showing this reduces mapping complexity for elliptic eigenproblems and makes mappings smoother on the Grassmann manifold, which aligns with neural network inductive biases.

10 retrieved papers
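The benefit of redundancy can be illustrated numerically (a sketch on synthetic data, not the paper's experiments): the part of a target k-dimensional subspace not captured by a predicted m-dimensional subspace, ||(I - VV^T)U||_F^2, can only shrink as m grows along a nested family of predictions, reaching zero once the prediction contains the target.

```python
import numpy as np

def containment_error(U, V):
    """Squared Frobenius norm of the component of span(U) outside span(V).
    Zero iff span(U) is contained in span(V); V assumed orthonormal."""
    R = U - V @ (V.T @ U)   # residual of U's basis after projection onto V
    return np.linalg.norm(R, "fro") ** 2

rng = np.random.default_rng(2)
n, k = 30, 2
U, _ = np.linalg.qr(rng.standard_normal((n, k)))

# A nested family of predicted subspaces V_m = span of the first m columns
# of B; the first k directions only approximate the target (perturbed U).
B, _ = np.linalg.qr(np.hstack([U + 0.1 * rng.standard_normal((n, k)),
                               rng.standard_normal((n, n - k))]))
errors = [containment_error(U, B[:, :m]) for m in range(k, n + 1)]
# errors is non-increasing in m and reaches 0 at m = n
```

This monotonicity is the elementary part of the story; the claimed theoretical contribution is the stronger statement that the parameter-to-subspace mapping itself becomes simpler and smoother when the predicted dimension exceeds the needed one.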

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution: Mathematical formulation of subspace regression problem

Contribution: Loss functions for subspace data suitable for neural network training

Contribution: Subspace embedding technique with theoretical justification