The False Promise of Zero-Shot Super-Resolution in Machine-Learned Operators

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: partial differential equations · neural operators · super-resolution · zero-shot super-resolution · multi-resolution training
Abstract:

A core challenge in scientific machine learning, and scientific computing more generally, is modeling continuous phenomena that, in practice, are represented discretely. Machine-learned operators (MLOs) have been introduced as a means to achieve this modeling goal, as this class of architecture can perform inference at arbitrary resolution. In this work, we evaluate whether this architectural innovation is sufficient to perform “zero-shot super-resolution,” namely to enable a model to serve inference on higher-resolution data than that on which it was originally trained. We comprehensively evaluate both zero-shot sub-resolution and super-resolution (i.e., multi-resolution) inference in MLOs. We decouple multi-resolution inference into two key behaviors: 1) extrapolating to varying frequency information; and 2) interpolating across varying resolutions. We empirically demonstrate that MLOs fail at both of these tasks in a zero-shot manner. Consequently, we find that MLOs are unable to perform accurate inference at resolutions different from those on which they were trained; instead, they are brittle and susceptible to aliasing. To address these failure modes, we propose a simple, computationally efficient, data-driven multi-resolution training protocol that overcomes aliasing and provides robust multi-resolution generalization.
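To make the protocol concrete, here is a minimal sketch of the multi-resolution evaluation the abstract describes, assuming 1-D periodic fields, NumPy, and placeholder callables (`model`, `solve_pde`) that are not part of the paper: a model trained at a single resolution is queried across a sweep of unseen grids and scored against a reference solver.

```python
import numpy as np

def regrid(u, n_out):
    """Resample a 1-D periodic signal to n_out points by zero-padding or
    truncating its Fourier spectrum (trigonometric interpolation)."""
    U = np.fft.rfft(u)
    U_out = np.zeros(n_out // 2 + 1, dtype=complex)
    k = min(len(U), len(U_out))
    U_out[:k] = U[:k]
    return np.fft.irfft(U_out, n=n_out) * (n_out / len(u))

def relative_l2(pred, target):
    """Relative L2 error, the standard operator-learning metric."""
    return np.linalg.norm(pred - target) / np.linalg.norm(target)

def zero_shot_eval(model, solve_pde, initial_conditions, resolutions):
    """Query a model trained at one resolution across unseen resolutions.

    An operator with genuine zero-shot super-resolution would produce a
    roughly flat error curve over `resolutions`; the paper reports errors
    that instead grow away from the training resolution."""
    errors = {}
    for r in resolutions:
        errs = []
        for u0 in initial_conditions:
            u0_r = regrid(u0, r)   # resample the input onto an r-point grid
            pred = model(u0_r)     # inference at a resolution never trained on
            ref = solve_pde(u0_r)  # reference solution on the same grid
            errs.append(relative_l2(pred, ref))
        errors[r] = float(np.mean(errs))
    return errors
```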

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND ITS JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

This paper evaluates zero-shot multi-resolution inference capabilities in machine-learned operators, specifically examining whether MLOs can perform accurate inference at resolutions different from those of their training data. It resides in the Methodological Foundations and Comparative Studies leaf, which contains only three papers in total. This sparse branch focuses on surveys, reviews, and critical analyses rather than novel method proposals, positioning the work as a foundational, evaluative contribution that examines the practical limitations of zero-shot claims in operator learning rather than one that introduces new architectures.

The taxonomy reveals substantial activity in neighboring branches. Neural Operator Architectures for PDE Solution Learning contains six papers exploring physics-informed and data-driven variants, while Zero-Shot Super-Resolution and Multi-Resolution Inference encompasses eleven papers across image, video, and scientific domains. The paper's critical stance on zero-shot capabilities distinguishes it from these method-focused branches. Its sibling papers are PDE Methods Review and Neural Operators Function, suggesting alignment with theoretical or comparative analysis rather than empirical method development, though the taxonomy's scope and exclusion notes indicate clear boundaries separating foundational studies from specific architectural innovations.

Among eighteen candidates examined across three contributions, zero refutable pairs were identified. The first contribution examining zero-shot multi-resolution inference evaluated eight candidates with none providing clear refutation. The second contribution on physics-informed constraints examined zero candidates. The third contribution proposing multi-resolution training protocols examined ten candidates, again with no refutations found. This limited search scope suggests the specific framing—comprehensive evaluation of MLO failure modes in zero-shot settings—may occupy a relatively unexplored niche, though the small candidate pool prevents definitive conclusions about novelty across the broader literature.

Based on the top eighteen semantic matches, the work appears to address a gap between theoretical operator-learning capabilities and practical multi-resolution generalization. The absence of refuting prior work within this limited scope, combined with placement in a sparse foundational branch, suggests the critical evaluation angle may be relatively novel. However, the small search scale and the concentration of matches in methodological rather than empirical branches limit confidence in assessing novelty against the full landscape of neural operator and super-resolution research.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 18
Refutable Papers: 0

Research Landscape Overview

Core task: zero-shot multi-resolution inference in machine-learned operators. This field addresses the challenge of training models that can generalize across different spatial or temporal resolutions without requiring retraining or fine-tuning at each new scale. The taxonomy reveals a diverse landscape organized into six main branches:

- Neural Operator Architectures for PDE Solution Learning: methods like Fourier Neural Operators[41] and Physics-Informed Neural Operator[1] that learn solution mappings for partial differential equations.
- Zero-Shot Super-Resolution and Multi-Resolution Inference: techniques such as Deep Internal Learning[22] and Zero-Shot Video Super-Resolution[12] that perform resolution enhancement without paired training data.
- Multi-Resolution and Multi-Scale Feature Learning: works like Multi-Scale Implicit Transformer[29] and MResT[47] that extract hierarchical representations.
- Zero-Shot Learning in Vision and Cross-Modal Tasks: broader generalization challenges exemplified by MVSAnywhere[3] and Real Zero-Shot Camouflage[8].
- Domain-Specific Multi-Scale and Zero-Shot Applications: specialized areas including Weather Downscaling Operators[19] and Aerospace Composite Curing[11].
- Methodological Foundations and Comparative Studies: theoretical grounding through reviews like PDE Methods Review[10] and Neural Operators Function[49].

Several active research directions reveal fundamental trade-offs between architectural complexity, generalization capability, and computational efficiency. Works on neural operators emphasize learning continuous mappings that naturally handle resolution changes, while zero-shot super-resolution methods often exploit internal statistics or meta-learning strategies to adapt without external supervision.

False Promise Zero-Shot[0] sits within the Methodological Foundations branch alongside PDE Methods Review[10] and Neural Operators Function[49], suggesting a critical or analytical perspective on zero-shot claims in operator learning. Compared to Neural Operators Function[49], which likely surveys theoretical properties of operator networks, False Promise Zero-Shot[0] appears to examine the practical limitations or overstated capabilities of zero-shot multi-resolution inference, potentially questioning whether current methods truly achieve resolution-independent generalization or merely interpolate within learned scales.

Claimed Contributions

Contribution 1: Comprehensive evaluation of zero-shot multi-resolution inference in MLOs (8 retrieved papers)

The authors systematically evaluate whether machine-learned operators can perform accurate inference at resolutions different from their training resolution. They decouple this task into resolution interpolation and frequency extrapolation, empirically demonstrating that MLOs fail at both and exhibit aliasing artifacts.
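The frequency-extrapolation half of this decoupling can be probed with a simple spectral diagnostic. The sketch below is a hedged illustration under the same 1-D periodic assumption as the earlier sketch, not the paper's code: it measures how much of the reference solution's energy above the training grid's Nyquist mode the super-resolved prediction actually recovers.

```python
import numpy as np

def power_spectrum(u):
    """One-sided power spectrum of a 1-D periodic field."""
    return np.abs(np.fft.rfft(u)) ** 2

def high_frequency_error(pred, ref, k_train):
    """Relative spectral error above the training-resolution Nyquist
    mode k_train. A model that merely interpolates its training band
    leaves these modes near zero or, worse, folds their energy back
    into lower modes, which is the aliasing the paper reports."""
    p, r = power_spectrum(pred), power_spectrum(ref)
    hi = slice(k_train, None)
    return np.linalg.norm(p[hi] - r[hi]) / np.linalg.norm(r[hi])
```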
Contribution 2: Evaluation of physics-informed constraints and band-limited learning approaches (0 retrieved papers)

The authors assess two previously proposed solutions for multi-resolution inference: physics-informed optimization objectives and band-limited learning methods (CNO and CROP). They show that neither approach reliably enables accurate zero-shot multi-resolution generalization.
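As a rough sketch of the first of these two approaches, the snippet below augments a standard data loss with a PDE-residual penalty. The equation (1-D viscous Burgers), the finite-difference discretization, and all names are illustrative assumptions; the paper's exact physics-informed objectives may differ.

```python
import torch

def burgers_residual(u, u_prev, dt, dx, nu):
    """Finite-difference residual of u_t + u*u_x - nu*u_xx = 0 on a
    periodic grid, evaluated at the predicted next state u."""
    u_x = (torch.roll(u, -1, -1) - torch.roll(u, 1, -1)) / (2 * dx)
    u_xx = (torch.roll(u, -1, -1) - 2 * u + torch.roll(u, 1, -1)) / dx ** 2
    u_t = (u - u_prev) / dt
    return u_t + u * u_x - nu * u_xx

def physics_informed_loss(pred, target, u_prev, dt, dx, nu, lam=1.0):
    """Data term plus a weighted penalty on the PDE residual."""
    data_term = torch.mean((pred - target) ** 2)
    physics_term = torch.mean(burgers_residual(pred, u_prev, dt, dx, nu) ** 2)
    return data_term + lam * physics_term
```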
Contribution 3: Multi-resolution training protocol for robust generalization (10 retrieved papers)

The authors propose a simple data-driven training approach that includes data from multiple resolutions. They demonstrate that models can achieve robust multi-resolution inference by training primarily on inexpensive low-resolution data with small amounts of high-resolution data, maintaining computational efficiency while improving generalization.
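The protocol amounts to a very small change to a standard training loop, sketched below under assumed names: most batches come from a cheap low-resolution dataset, and a small fraction (the 10% default here is a placeholder, not the paper's reported ratio) are swapped for high-resolution batches. Any resolution-agnostic operator, such as an FNO, can consume both batch shapes.

```python
import random
import torch

def train_multi_resolution(model, low_res_batches, high_res_batches,
                           optimizer, epochs=10, high_res_fraction=0.1):
    """Mixed-resolution training: with probability high_res_fraction,
    substitute a high-resolution batch for the usual low-resolution one.
    `high_res_batches` is assumed to be a small in-memory list of
    (inputs, targets) tensor pairs; `low_res_batches` is any re-iterable
    of the same, e.g. a DataLoader."""
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for low_batch in low_res_batches:
            batch = (random.choice(high_res_batches)
                     if random.random() < high_res_fraction
                     else low_batch)
            inputs, targets = batch
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model
```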

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1: Comprehensive evaluation of zero-shot multi-resolution inference in MLOs. Eight candidate papers were retrieved and compared; none provided a clear refutation.

Contribution 2: Evaluation of physics-informed constraints and band-limited learning approaches. No candidate papers were retrieved for comparison.

Contribution 3: Multi-resolution training protocol for robust generalization. Ten candidate papers were retrieved and compared; no refutations were found.