Fast training of accurate physics-informed neural networks without gradient descent

ICLR 2026 Conference Submission (Anonymous Authors)
Keywords: physics-informed neural networks, extreme learning machines, random features, partial differential equations, optimization, training, causality, neural PDE solvers
Abstract:

Solving time-dependent Partial Differential Equations (PDEs) is one of the most critical problems in computational science. While Physics-Informed Neural Networks (PINNs) offer a promising framework for approximating PDE solutions, their accuracy and training speed are limited by two core barriers: gradient-descent-based iterative optimization over complex loss landscapes and non-causal treatment of time as an extra spatial dimension. We present Frozen-PINN, a novel PINN based on the principle of space-time separation that leverages random features instead of training with gradient descent, and incorporates temporal causality by construction. On nine PDE benchmarks, including challenges like extreme advection speeds, shocks, and high-dimensionality, Frozen-PINNs achieve superior training efficiency and accuracy over state-of-the-art PINNs, often by several orders of magnitude. Our work addresses longstanding training and accuracy bottlenecks of PINNs, delivering quickly trainable, highly accurate, and inherently causal PDE solvers, a combination that prior methods could not realize. Our approach challenges the reliance of PINNs on stochastic gradient-descent-based methods and specialized hardware, leading to a paradigm shift in PINN training and providing a challenging benchmark for the community.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces Frozen-PINN, a method combining space-time separation with random features and temporal causality for solving time-dependent PDEs. It resides in the 'Causality-Aware and Temporal Structure Exploitation' leaf, which contains only four papers total, indicating a relatively sparse research direction within the broader PINN landscape. This leaf sits under 'Physics-Informed Neural Network Architectures and Training Methods', one of the major branches in a taxonomy spanning 50 papers across diverse approaches. The small sibling set suggests this specific combination of causality enforcement and training efficiency remains an active but not yet crowded area.

The taxonomy reveals neighboring leaves addressing related but distinct concerns: 'Recurrent and Sequential PINN Formulations' (2 papers) explores temporal continuity through recurrent architectures, while 'Domain Decomposition and Multi-Scale PINN Strategies' (3 papers) tackles spatiotemporal complexity through partitioning rather than causality. 'Preconditioning and Optimization Enhancements for PINNs' (2 papers) shares the efficiency motivation but focuses on gradient-based optimization improvements. The taxonomy's scope note for the target leaf explicitly excludes methods treating time as a standard spatial dimension, positioning Frozen-PINN's causal approach as a deliberate departure from conventional PINN formulations that dominate other branches.

Among 25 candidates examined, the Frozen-PINN training algorithm (Contribution 1) shows one refutable candidate from 5 examined, while adaptive solution-driven parameters (Contribution 2) also has one refutable candidate from 10 examined. The SVD compression layer (Contribution 3) found no refutable prior work among 10 candidates. These statistics reflect a limited search scope rather than exhaustive coverage: the analysis captures top-K semantic matches and citation expansion, not the entire literature. The training algorithm and adaptive parameters appear to have some precedent in the examined subset, while the compression approach shows less overlap within this sample.

Given the sparse taxonomy leaf and limited search scope, the work appears to occupy a relatively underexplored intersection of causality enforcement and training efficiency. The two refutable pairs among 25 candidates suggest partial overlap with prior efforts, but the small sibling set (4 papers) and narrow search window mean substantial related work may exist outside the examined sample. The assessment reflects what 25 semantically similar papers reveal, not a definitive novelty verdict across all PINN literature.

Taxonomy

- Core-task taxonomy papers: 50
- Claimed contributions: 3
- Contribution candidate papers compared: 25
- Refutable papers: 2

Research Landscape Overview

Core task: Solving time-dependent partial differential equations using neural networks. The field has evolved into a rich ecosystem of approaches that can be broadly organized into several major branches.

Physics-Informed Neural Network (PINN) architectures and training methods focus on embedding governing equations directly into loss functions, often exploring causality-aware structures and temporal exploitation strategies to improve accuracy and efficiency. Data-driven and operator learning methods, exemplified by works like GraphDeepONet[2], learn solution operators from observations rather than relying solely on equation residuals. PDE discovery and system identification aim to infer unknown governing equations from data, while specialized formulations target particular equation classes such as conservation laws or reaction-diffusion systems. Numerical stability and long-term prediction address the challenge of error accumulation over extended time horizons, and hybrid approaches combine classical numerical solvers with neural components to leverage the strengths of both paradigms. Additional branches cover sampling and generative modeling via PDE transport, general frameworks for time-varying systems, zeroing neural networks for time-varying equations, and domain-specific applications ranging from fluid dynamics to room acoustics.

Within the causality-aware and temporal structure exploitation line of work, researchers have pursued various strategies to respect the directional flow of time and improve training efficiency. Causal PINN Estimation[3] and Causality Enhanced Discreted[43] both emphasize enforcing temporal causality to prevent information leakage and enhance solution quality, while Sampled Neural Networks[1] explores efficient sampling strategies during training.
Fast PINN Training[0] sits naturally within this cluster, focusing on accelerating the training process by exploiting temporal structure, a concern shared by many PINN practitioners who face high computational costs. Compared to Causal PINN Estimation[3], which prioritizes causality constraints, Fast PINN Training[0] appears to emphasize computational efficiency as a primary goal. Meanwhile, broader efforts in the field such as Spatiotemporal Deep Learning[5] and Neural Time PDE[4] tackle related challenges of capturing complex spatiotemporal dependencies, illustrating the diverse strategies researchers employ to make neural PDE solvers both accurate and practical for real-world time-dependent problems.

Claimed Contributions

Frozen-PINN training algorithm

The authors introduce Frozen-PINN, a physics-informed neural network that separates space and time, replaces gradient-descent training with random features, and incorporates temporal causality by construction. The approach is reported to achieve superior training efficiency and accuracy compared to existing PINNs.

5 retrieved papers (verdict: Can Refute)
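The report contains no implementation details, but the core mechanism named here, frozen random features plus a single linear solve in place of gradient descent, in the extreme-learning-machine tradition, can be sketched as follows. The feature family, weight scales, and toy fitting target are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen (random) hidden layer: input weights and biases are sampled once
# and never updated -- there is no gradient descent anywhere.
n_points, n_features = 100, 200
W = rng.normal(scale=3.0, size=(1, n_features))   # random input weights
b = rng.uniform(-np.pi, np.pi, n_features)        # random biases

x = np.linspace(0.0, 1.0, n_points)[:, None]      # collocation points
Phi = np.tanh(x @ W + b)                          # feature matrix (100 x 200)

# The only "training" is one linear least-squares solve for the outer
# weights. Here we fit a known function for clarity; in a PINN setting the
# PDE residual and boundary/initial conditions would define the system.
y = np.sin(2.0 * np.pi * x).ravel()
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rel_err = np.linalg.norm(Phi @ c - y) / np.linalg.norm(y)
print(rel_err)
```

In the space-time-separated setting described above, only the spatial representation would be built this way, with time handled by a causal stepping scheme rather than treated as an extra input dimension.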
Adaptive solution-driven network parameters

The method extends previous random feature approaches by computing neural network parameters adaptively using solution data from earlier time steps, enabling more efficient self-supervised PDE learning.

10 retrieved papers (verdict: Can Refute)
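How the parameters are "computed adaptively using solution data from earlier time steps" is not specified in this summary. One hedged reading is that feature parameters are sampled from a density shaped by the previous step's solution, so features concentrate where the solution has structure. The helper names (`adaptive_features`, `feature_matrix`) and the Gaussian-bump feature family below are invented for illustration and may differ from the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_features(x, u_prev, n_features):
    """Sample feature parameters guided by the previous time step.

    Hypothetical sketch: locations where the earlier solution varies
    strongly are sampled more often.
    """
    grad = np.abs(np.gradient(u_prev, x))
    p = grad / grad.sum()                      # solution-driven sampling density
    centers = rng.choice(x, size=n_features, p=p)
    scales = rng.uniform(1.0, 10.0, n_features)
    return centers, scales

def feature_matrix(x, centers, scales):
    # Gaussian bumps at the sampled locations (one possible feature family).
    return np.exp(-(scales * (x[:, None] - centers)) ** 2)

x = np.linspace(0.0, 1.0, 200)
u_prev = np.tanh(20.0 * (x - 0.5))             # sharp front from an earlier step
centers, scales = adaptive_features(x, u_prev, 50)
Phi = feature_matrix(x, centers, scales)

# Most sampled centers cluster near the front at x = 0.5.
print(np.mean(np.abs(centers - 0.5) < 0.1))
print(Phi.shape)
```

The design intent is that the frozen basis adapts to the evolving solution without ever introducing an iterative optimizer.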
SVD layer for model compression

A singular value decomposition layer is added to reduce the dimensionality of the ODE system and improve computational efficiency by orthogonalizing basis functions, achieving significant compression and speedup.

10 retrieved papers (no refutable candidate found)
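The SVD compression step can be illustrated generically: a thin SVD of the feature matrix yields an orthonormal basis spanning the same space, and truncating small singular values shrinks the number of unknowns the reduced ODE system must evolve. The truncation tolerance, feature family, and sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random basis functions evaluated at collocation points.
n_points, n_features = 300, 200
x = np.linspace(0.0, 1.0, n_points)[:, None]
W = rng.normal(size=(1, n_features))
b = rng.uniform(-np.pi, np.pi, n_features)
Phi = np.tanh(x @ W + b)

# Thin SVD: the columns of U form an orthonormal basis for the same span.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)

# Truncate at a relative singular-value tolerance (value is illustrative):
# the time-stepped ODE system then has r unknowns instead of n_features.
r = int(np.sum(s > 1e-8 * s[0]))
U_r = U[:, :r]

print(r, n_features)                            # r is much smaller here
print(np.allclose(U_r.T @ U_r, np.eye(r)))      # compressed basis is orthonormal
```

Orthogonalizing the basis this way also tends to improve the conditioning of the downstream linear solves, which is one plausible source of the reported speedup.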

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution
