CL-DPS: A Contrastive Learning Approach to Blind Nonlinear Inverse Problem Solving via Diffusion Posterior Sampling

ICLR 2026 Conference Submission, Anonymous Authors
Keywords: Diffusion Models, Blind Inverse Problems, Contrastive Learning
Abstract:

Diffusion models (DMs) have recently become powerful priors for solving inverse problems. However, most work focuses on non-blind settings with known measurement operators, and existing DM-based blind solvers largely assume linear measurements, which limits practical applicability where operators are frequently nonlinear. We introduce CL-DPS, a contrastively trained likelihood for diffusion posterior sampling that requires no knowledge of the operator parameters at inference. To the best of our knowledge, CL-DPS is the first DM-based framework capable of solving blind nonlinear inverse problems. Our key idea is to train an auxiliary encoder offline, using a MoCo-style contrastive objective over randomized measurement operators, to learn a surrogate for the conditional likelihood $p(\boldsymbol{y} | \boldsymbol{x}_t)$. During sampling, we inject the surrogate's gradient as a guidance term along the reverse diffusion trajectory, which enables posterior sampling without estimating or inverting the forward operator. We further employ overlapping patch-wise inference to preserve fine structure and a lightweight color-consistency head to stabilize color statistics. The guidance is sampler-agnostic and pairs well with modern solvers (e.g., DPM-Solver++ (2M)). Extensive experiments show that CL-DPS effectively handles challenging nonlinear cases, such as rotational and zoom deblurring, where prior DM-based methods fail, while remaining competitive on standard linear benchmarks. Code: \url{https://anonymous.4open.science/r/CL-DPS-4F5D}.
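As a rough illustration of the guidance mechanism the abstract describes, the sketch below adds the gradient of a likelihood surrogate to each reverse-diffusion update. All names are hypothetical, and the quadratic surrogate is a toy stand-in for the paper's contrastively trained encoder; this is not the authors' implementation.

```python
import numpy as np

def surrogate_logit(y, x):
    # Toy stand-in for the contrastive similarity score between the
    # measurement embedding f(y) and the state embedding g(x_t).
    return -0.5 * np.sum((y - x) ** 2)

def surrogate_grad(y, x, eps=1e-4):
    # Finite-difference gradient of the surrogate w.r.t. x_t; a trained
    # encoder would supply this via backpropagation instead.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (surrogate_logit(y, x + d) - surrogate_logit(y, x - d)) / (2 * eps)
    return g

def guided_reverse_step(x_t, y, denoise_mean, sigma_t, scale=1.0, rng=None):
    # One reverse step: the unconditional model prediction plus the
    # surrogate's gradient, injected as a guidance term, plus fresh noise.
    rng = rng or np.random.default_rng(0)
    mu = denoise_mean(x_t)                      # unconditional prediction
    mu = mu + scale * surrogate_grad(y, x_t)    # likelihood-surrogate guidance
    return mu + sigma_t * rng.standard_normal(x_t.shape)
```

With an identity "denoiser" and zero noise, repeated guided steps pull the state toward the measurement, which is the qualitative behavior the guidance term is meant to provide; the real method uses a pretrained diffusion model and a learned encoder.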

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While the system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. These results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
Note that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces CL-DPS, a framework for solving blind nonlinear inverse problems using diffusion posterior sampling with a contrastively trained likelihood surrogate. It resides in the 'Contrastive and Measurement-Conditioned Priors for Blind Problems' leaf, which contains only two papers total (including this one). This sparse population suggests the specific combination of contrastive learning and blind nonlinear operator handling represents a relatively underexplored niche within the broader field of diffusion-based inverse problem solving, which encompasses fifty papers across thirty-six distinct research directions.

The taxonomy reveals that CL-DPS sits within the 'Blind and Operator-Unknown Inverse Problems' branch, which includes four leaves addressing joint estimation, contrastive priors, fast inversion, and domain-specific blind problems. Neighboring branches tackle likelihood approximation mechanisms (five leaves, fourteen papers) and nonlinear forward models (three leaves, five papers). The scope notes indicate CL-DPS bridges two traditionally separate concerns: handling unknown operators (the blind problem) and managing nonlinear measurement physics. Most prior work in adjacent leaves either assumes known operators or restricts to linear measurements, positioning CL-DPS at the intersection of these challenges.

Among the thirty candidates examined through semantic search, none clearly refute the three core contributions: the CL-DPS framework itself (ten candidates examined, zero refutable), the theoretical energy-based justification (ten candidates, zero refutable), and the patch-wise inference with information-theoretic guarantees (ten candidates, zero refutable). The single sibling paper in the same taxonomy leaf (PRISM) addresses measurement conditioning but does not explicitly combine contrastive learning with blind nonlinear operator handling. Within this limited search scope, the specific technical approach appears novel, though the analysis does not cover prior work beyond the top-thirty semantic matches.

Based on the taxonomy structure and contribution-level statistics, CL-DPS appears to occupy a genuinely sparse research direction where contrastive learning meets blind nonlinear inverse problems. The absence of refutable candidates across thirty examined papers, combined with the leaf's minimal population, suggests substantive novelty within the scope analyzed. However, this assessment is constrained by the limited search methodology and does not preclude the existence of relevant work outside the top-thirty semantic neighborhood or in adjacent research communities not captured by the taxonomy construction process.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 0

Research Landscape Overview

Core task: blind nonlinear inverse problem solving via diffusion posterior sampling. This field addresses the challenge of reconstructing unknown signals from measurements when the forward operator itself is uncertain or unknown, leveraging diffusion models as powerful priors.

The taxonomy reveals several complementary research directions. Likelihood Approximation and Guidance Mechanisms explores how to steer diffusion sampling toward measurement consistency, with works like Gaussian DPS[2] and Adaptive Guidance Scale[27] refining gradient-based guidance strategies. Blind and Operator-Unknown Inverse Problems tackles scenarios where the degradation operator must be inferred jointly with the signal, as seen in Blind Latent Diffusion[6] and PRISM[41]. Nonlinear Forward Models and Complex Measurements handles intricate measurement physics beyond linear operators, exemplified by Nonlinear CT DPS[3] and Subspace Tomography[4]. Posterior Sampling Algorithms and Convergence investigates the theoretical and practical aspects of sampling schemes, including Ensemble Sampling[7] and Filtering Perspective[9]. Plug-and-Play and Training-Free Methods emphasizes flexibility by integrating pre-trained models without retraining, as in Plug-and-Play Posterior[8]. Domain-Specific Applications spans diverse fields from medical imaging to audio processing, while Noisy Measurements and Robustness addresses measurement corruption and stability concerns.

Particularly active lines of work contrast training-free guidance methods, which adapt pre-trained diffusion models at test time, against approaches that learn measurement-conditioned or operator-aware priors. Another key tension lies between methods that assume known forward operators and those that tackle fully blind settings where both signal and operator are unknown. CL-DPS[0] sits within the Blind and Operator-Unknown branch, specifically under contrastive and measurement-conditioned priors.
It shares thematic ground with PRISM[41], which also addresses blind inverse problems through measurement conditioning, but CL-DPS[0] emphasizes contrastive learning to disentangle operator uncertainty from signal reconstruction. Compared to Blind Latent Diffusion[6], which operates in latent space, CL-DPS[0] focuses on leveraging contrastive structures to guide posterior sampling in blind scenarios. This positioning highlights ongoing efforts to balance expressiveness of learned priors with the flexibility needed when forward models are partially or entirely unknown.

Claimed Contributions

CL-DPS framework for blind nonlinear inverse problems

The authors propose CL-DPS, a diffusion posterior sampling method that uses contrastive learning to train an auxiliary encoder offline. This encoder learns a surrogate for the conditional likelihood without requiring knowledge of measurement operator parameters, enabling the first diffusion model-based solution to blind nonlinear inverse problems.

10 retrieved papers
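The core of a MoCo-style objective of the kind described above can be sketched as a plain InfoNCE loss: the embedding of a measurement is pulled toward the embedding of its source image (the positive pair, produced under a randomly sampled operator) and pushed away from a queue of negative keys. The function below is a generic illustration, not the paper's implementation; the encoders, operator sampler, and momentum queue are all omitted.

```python
import numpy as np

def info_nce_loss(q, k_pos, queue, tau=0.07):
    """InfoNCE over one positive key and a queue of negatives.

    q: query embedding f(y) of a measurement; k_pos: key embedding g(x) of
    the clean image that produced y; queue: (K, d) array of negative keys
    from other images. All embeddings are L2-normalized before scoring.
    """
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    # First logit is the positive similarity; the rest come from the queue.
    logits = np.concatenate([[q @ k_pos], queue @ q]) / tau
    logits -= logits.max()  # numerical stability for the softmax
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```

The loss is small when the query aligns with its positive key and large when it does not, which is what drives the encoder to associate measurements with their underlying signals regardless of which operator produced them.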
Theoretical justification via Lemma 1 and energy-based formulation

The authors provide theoretical grounding by proving that the gradient of their contrastive softmax surrogate converges to the true likelihood gradient as the dictionary size increases, justifying the use of contrastive learning for likelihood estimation in diffusion posterior sampling.

10 retrieved papers
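In symbols, the claimed result has the shape of a standard InfoNCE partition-function argument. The notation below is assumed, since the report does not reproduce the paper's equations: $s(\cdot,\cdot)$ is the contrastive similarity score, $\tau$ the temperature, and $\{\boldsymbol{y}_k\}_{k=1}^{K}$ the dictionary of negative measurements.

```latex
\hat{\ell}_K(\boldsymbol{y}, \boldsymbol{x}_t)
  = \log \frac{\exp\!\big(s(\boldsymbol{y}, \boldsymbol{x}_t)/\tau\big)}
              {\tfrac{1}{K}\sum_{k=1}^{K} \exp\!\big(s(\boldsymbol{y}_k, \boldsymbol{x}_t)/\tau\big)},
\qquad
\nabla_{\boldsymbol{x}_t}\,\hat{\ell}_K(\boldsymbol{y}, \boldsymbol{x}_t)
  \;\xrightarrow{\;K \to \infty\;}\;
  \nabla_{\boldsymbol{x}_t} \log p(\boldsymbol{y} \mid \boldsymbol{x}_t).
```

Intuitively, as the dictionary grows the denominator becomes a Monte Carlo estimate of the normalizing constant of an energy-based model $p(\boldsymbol{y} \mid \boldsymbol{x}_t) \propto \exp(s(\boldsymbol{y}, \boldsymbol{x}_t)/\tau)$, so the gradient of the softmax surrogate approaches the true likelihood score used in diffusion posterior sampling.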
Overlapping patch-wise inference with information-theoretic guarantee

The authors develop an overlapping patch-wise inference method that divides images into patches during inference. They prove via Theorem 1 that using more overlapping patches increases the mutual information between the input signal and the encoder output, thereby improving reconstruction quality.

10 retrieved papers
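The overlap-and-average mechanics of patch-wise inference can be sketched as follows. This is a generic illustration assuming square patches and uniform averaging in overlap regions; the paper's Theorem 1 concerns the mutual-information benefit of the overlaps, which this toy code does not measure.

```python
import numpy as np

def patchwise_apply(img, fn, patch=8, stride=4):
    """Apply fn to overlapping patches of a 2-D image and average overlaps.

    With stride < patch, neighboring patches overlap; each output pixel is
    the mean of all patch outputs that cover it.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    count = np.zeros((h, w), dtype=float)
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            out[i:i + patch, j:j + patch] += fn(img[i:i + patch, j:j + patch])
            count[i:i + patch, j:j + patch] += 1
    return out / np.maximum(count, 1)
```

With an identity per-patch function the reconstruction is exact, which is a quick sanity check that the accumulation and normalization are consistent; in CL-DPS the per-patch function would be the encoder-guided sampling step.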

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current TopK core-task papers, the original paper is assigned to a sparsely populated leaf with a single direct sibling (PRISM[41]) and no cousin branches under the same grandparent topic. In this retrieved landscape it appears nearly structurally isolated, which is one partial signal of novelty, but still constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

CL-DPS framework for blind nonlinear inverse problems

The authors propose CL-DPS, a diffusion posterior sampling method that uses contrastive learning to train an auxiliary encoder offline. This encoder learns a surrogate for the conditional likelihood without requiring knowledge of measurement operator parameters, enabling the first diffusion model-based solution to blind nonlinear inverse problems.

Contribution

Theoretical justification via Lemma 1 and energy-based formulation

The authors provide theoretical grounding by proving that the gradient of their contrastive softmax surrogate converges to the true likelihood gradient as the dictionary size increases, justifying the use of contrastive learning for likelihood estimation in diffusion posterior sampling.

Contribution

Overlapping patch-wise inference with information-theoretic guarantee

The authors develop an overlapping patch-wise inference method that divides images into patches during inference. They prove via Theorem 1 that using more overlapping patches increases the mutual information between the input signal and the encoder output, thereby improving reconstruction quality.