Landing with the Score: Riemannian Optimization through Denoising

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: Riemannian Optimization · Manifold Learning · Diffusion Models
Abstract:

Under the \emph{data manifold hypothesis}, high-dimensional data concentrate near a low-dimensional manifold. We study Riemannian optimization when this manifold is only given implicitly through the data distribution, and standard geometric operations are unavailable. This formulation captures a broad class of data-driven design problems that are central to modern generative AI. Our key idea is a \emph{link function} that ties the data distribution to the geometric quantities needed for optimization: its gradient and Hessian recover the projection onto the manifold and its tangent space in the small-noise regime. This construction is directly connected to the score function in diffusion models, allowing us to leverage well-studied parameterizations, efficient training procedures, and even pretrained score networks from the diffusion model literature to perform optimization. On top of this foundation, we develop two efficient inference-time algorithms for optimization over data manifolds: \emph{Denoising Landing Flow} (DLF) and \emph{Denoising Riemannian Gradient Descent} (DRGD). We provide theoretical guarantees for approximate feasibility (manifold adherence) and optimality (small Riemannian gradient norm). We demonstrate the effectiveness of our approach on finite-horizon reference tracking tasks in data-driven control, illustrating its potential for practical generative and design applications.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes a link function framework connecting data distributions to Riemannian geometric quantities, enabling optimization over implicitly defined manifolds via score-based methods. It introduces two algorithms—Denoising Landing Flow and Denoising Riemannian Gradient Descent—with non-asymptotic convergence guarantees. Within the taxonomy, this work occupies the 'Optimization via Denoising and Score-Based Methods' leaf under 'Optimization Algorithms and Computational Methods'. Notably, this leaf contains only the original paper itself, indicating a sparse research direction with no identified sibling papers in the taxonomy structure.

The taxonomy reveals that neighboring leaves contain related but distinct approaches. 'Gradient-Based Riemannian Optimization' includes five papers on classical Riemannian methods using retractions and parallel transport, while 'Meta-Optimization and Learned Optimizers' explores neural network-based optimization strategies. The parent branch 'Optimization Algorithms and Computational Methods' encompasses diverse algorithmic paradigms including implicit integration schemes and population-based methods. The paper's focus on score functions and denoising distinguishes it from these classical geometric approaches, positioning it at the intersection of diffusion models and Riemannian optimization—a boundary explicitly noted in the taxonomy's scope definitions.

Among thirty candidates examined through semantic search, none were found to clearly refute any of the three main contributions. For the link function connecting distributions to geometry, ten candidates were examined with zero refutable matches. Similarly, the two score-based algorithms and the convergence guarantees each had ten candidates examined, yielding no clear prior work providing overlapping results. This absence of refutation within the limited search scope suggests the specific combination of score-based methods with Riemannian optimization theory may represent relatively unexplored territory, though the search scale precludes definitive conclusions about the broader literature.

The analysis reflects a focused semantic search rather than exhaustive coverage of geometric optimization or diffusion modeling literature. The taxonomy structure shows active research in adjacent areas—particularly classical Riemannian methods and representation learning—but the specific integration of score functions with manifold optimization appears less populated. The limited search scope and sparse taxonomy leaf suggest potential novelty, though comprehensive assessment would require broader examination of diffusion model literature and recent geometric deep learning developments.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 0

Research Landscape Overview

Core task: Riemannian optimization over implicitly defined data manifolds. This field addresses the challenge of performing optimization when the underlying geometric structure is not given explicitly but must be inferred or approximated from data.

The taxonomy reveals a rich landscape organized into six main branches. Theoretical Foundations and Geometric Frameworks establish the mathematical underpinnings, including metric learning and curvature-aware methods such as Curvature-Adaptive Transformers[49]. Optimization Algorithms and Computational Methods encompass a variety of algorithmic strategies, from score-based and denoising approaches to proximal and projection techniques like Tangent Space Proxies[5] and Implicit Riemannian Optimism[6]. Representation Learning and Generative Modeling focuses on learning latent manifold structures, with works such as Riemannian Diffeomorphic Autoencoding[3] and Manifold Gaussian Variational[19] exploring how to encode data geometry. Application Domains span robotics, computer vision, and scientific computing, while Specialized Techniques address extensions like multi-manifold clustering and tensor methods. Survey and Methodological Reviews, including Geometric Optimization Survey[14], provide integrative perspectives on the field's evolution.

A particularly active line of work centers on optimization via denoising and score-based methods, which leverage diffusion models and score matching to navigate implicit manifolds without explicit parameterization. Riemannian Denoising[0] exemplifies this direction by integrating denoising objectives with Riemannian geometry, offering a principled way to handle noise and curvature simultaneously. This approach contrasts with more classical projection-based methods like Tangent Space Proxies[5], which rely on local linear approximations, and with meta-learning frameworks such as Riemannian Meta-Optimization[11] that adapt optimization strategies across tasks. Nearby works like Iso-Riemannian Optimization[2] and Only Bayes Manifold[4] explore related themes of preserving geometric structure during optimization and probabilistic inference on manifolds. The original paper sits at the intersection of geometric rigor and practical denoising, addressing the trade-off between computational tractability and fidelity to the underlying manifold structure, a central tension across many branches of this taxonomy.

Claimed Contributions

Link function connecting data distribution to manifold geometry

The authors introduce a link function based on smoothing the data distribution with a Gaussian kernel. They prove that the gradient and Hessian of this link function approximate the projection onto the data manifold and its tangent space as the smoothing parameter decreases, enabling data-driven manifold operations.
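This construction can be illustrated numerically. For an empirical distribution smoothed with a Gaussian kernel of width σ, the score of the smoothed density has a closed form, and Tweedie's formula x + σ²∇log p_σ(x) gives the posterior-mean denoiser, which acts as an approximate projection onto the data manifold as σ shrinks. The sketch below is our own toy illustration (unit circle as the data manifold), not the paper's code:

```python
import numpy as np

def smoothed_score(x, data, sigma):
    """Score of the Gaussian-smoothed empirical distribution p_sigma:
    grad log p_sigma(x) = sum_i w_i (x_i - x) / sigma^2,
    with softmax weights w_i over squared distances to the data points."""
    d2 = np.sum((data - x) ** 2, axis=1)
    w = np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))
    w /= w.sum()
    return (w @ (data - x)) / sigma ** 2

def denoise(x, data, sigma):
    """Tweedie's formula: posterior mean E[x0 | x], an approximate
    projection of x onto the data manifold for small sigma."""
    return x + sigma ** 2 * smoothed_score(x, data, sigma)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # data manifold: unit circle

x = np.array([1.2, 0.9])          # off-manifold query point
proj = denoise(x, circle, 0.1)
print(np.linalg.norm(proj))        # ~1: the denoised point lands near the circle
```

The denoiser is exactly a softmax-weighted average of nearby data points, which is why its output concentrates on the manifold as σ decreases.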

10 retrieved papers
Two score-based algorithms for optimization over data manifolds

The authors develop two novel algorithms that leverage pretrained score networks from diffusion models to perform Riemannian optimization on data manifolds. These methods require only inference-time queries and do not need additional training if a pretrained score network is available.
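While the paper's exact DLF/DRGD updates are not reproduced in this report, a landing-style iteration of the kind the contribution describes can be sketched using only inference-time denoiser queries: estimate the normal direction from the denoiser residual x − D(x), project the objective gradient onto the tangent space, take a tangential descent step, and add a pull D(x) − x back toward the manifold. All function names, step sizes, and the toy circle manifold below are our own illustrative choices:

```python
import numpy as np

def denoise(x, data, sigma):
    """Posterior-mean denoiser via Tweedie's formula on the
    Gaussian-smoothed empirical distribution (approximate projection)."""
    d2 = np.sum((data - x) ** 2, axis=1)
    w = np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))
    w /= w.sum()
    return w @ data  # equals x + sigma^2 * score(x)

def landing_step(x, grad_f, data, sigma, eta=0.05, lam=1.0):
    """One landing-style update: tangential descent plus manifold pull.
    An illustrative sketch, not the paper's exact DLF/DRGD update."""
    Dx = denoise(x, data, sigma)
    n = x - Dx                        # estimated normal direction
    n_norm = np.linalg.norm(n)
    if n_norm > 1e-8:
        n = n / n_norm
        grad_f = grad_f - (grad_f @ n) * n   # project out normal component
    return x - eta * grad_f + lam * (Dx - x)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Minimize f(x) = x[0] over the circle; the optimum is (-1, 0).
x = np.array([0.6, 0.8])
for _ in range(300):
    x = landing_step(x, np.array([1.0, 0.0]), circle, sigma=0.1)
print(x)  # close to (-1, 0), staying near the unit circle
```

Note that no additional training occurs: every iteration only queries the (here closed-form) denoiser, mirroring how a pretrained score network would be used at inference time.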

10 retrieved papers
Non-asymptotic convergence guarantees for approximate feasibility and optimality

The authors provide theoretical guarantees demonstrating that their algorithms converge to points that are approximately feasible on the manifold and approximately optimal in terms of Riemannian gradient norm, with error bounds that vanish as the smoothing parameter approaches zero.
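The vanishing-error behavior can be probed empirically. In the same toy setup (our illustration, not the paper's experiments), the distance from the denoised point to the unit circle shrinks as the smoothing parameter σ decreases:

```python
import numpy as np

def denoise(x, data, sigma):
    """Posterior-mean denoiser for the sigma-smoothed empirical distribution."""
    d2 = np.sum((data - x) ** 2, axis=1)
    w = np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))
    w /= w.sum()
    return w @ data

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)

x = np.array([1.2, 0.9])
for sigma in (0.3, 0.1, 0.03):
    err = abs(np.linalg.norm(denoise(x, circle, sigma)) - 1.0)
    print(f"sigma={sigma}: distance to manifold = {err:.4f}")
```

The printed distances decrease with σ, consistent with (though of course not a proof of) error bounds that vanish as the smoothing parameter approaches zero.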

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current TopK core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, a partial signal of novelty, though one constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Link function connecting data distribution to manifold geometry

The authors introduce a link function based on smoothing the data distribution with a Gaussian kernel. They prove that the gradient and Hessian of this link function approximate the projection onto the data manifold and its tangent space as the smoothing parameter decreases, enabling data-driven manifold operations.

Contribution

Two score-based algorithms for optimization over data manifolds

The authors develop two novel algorithms that leverage pretrained score networks from diffusion models to perform Riemannian optimization on data manifolds. These methods require only inference-time queries and do not need additional training if a pretrained score network is available.

Contribution

Non-asymptotic convergence guarantees for approximate feasibility and optimality

The authors provide theoretical guarantees demonstrating that their algorithms converge to points that are approximately feasible on the manifold and approximately optimal in terms of Riemannian gradient norm, with error bounds that vanish as the smoothing parameter approaches zero.
