Abstract:

We study the problem of posterior sampling in the context of score-based generative models. We have a trained score network for a prior p(x), a measurement model p(y|x), and are tasked with sampling from the posterior p(x|y). Prior work has shown this to be intractable in KL divergence (in the worst case) under well-accepted computational hardness assumptions. Despite this, popular algorithms for tasks such as image super-resolution, stylization, and reconstruction enjoy empirical success. Rather than establishing distributional assumptions or restricted settings under which exact posterior sampling is tractable, we view this as a more general "tilting" problem of biasing a distribution towards a measurement. Under minimal assumptions, we show that one can tractably sample from a distribution that is simultaneously close to the posterior of a noised prior in KL divergence and close to the true posterior in Fisher divergence. Intuitively, this combination ensures that the resulting sample is consistent with both the measurement and the prior. To the best of our knowledge, these are the first formal results for (approximate) posterior sampling in polynomial time.
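For reference, the two divergences named in the abstract have the following standard definitions (these are textbook formulas, not taken from the paper; conventions such as a factor of 1/2 in the Fisher divergence vary across references):

```latex
\mathrm{KL}(q \,\|\, p) = \int q(x) \log \frac{q(x)}{p(x)} \, dx,
\qquad
\mathrm{FD}(q \,\|\, p) = \int q(x) \,\bigl\| \nabla_x \log q(x) - \nabla_x \log p(x) \bigr\|^2 \, dx.
```

KL compares the densities globally, while the Fisher divergence compares only their score functions, which is the natural notion of closeness when the target distribution is accessed through a trained score network.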

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper contributes a tractable approximate posterior sampling algorithm with dual guarantees: closeness to a noised posterior in KL divergence and to the true posterior in Fisher divergence. It resides in the Convergence Theory and Provable Guarantees leaf, which contains four papers total, including this one. This leaf sits within the Theoretical Foundations and Algorithmic Frameworks branch, indicating a focus on formal analysis rather than application-driven heuristics. The small sibling count suggests this is a relatively sparse research direction within the broader score-based posterior sampling landscape.

The taxonomy tree shows that neighboring leaves include Geometric and Probabilistic Perspectives (two papers on Wasserstein flows and SDE frameworks) and Sequential and Simulation-Based Inference (seven papers on trajectory-based methods). The paper's dual-divergence framework diverges from purely geometric interpretations and from sequential updating schemes, instead addressing the single-step posterior sampling problem with explicit complexity bounds. The exclude_note for this leaf clarifies that empirical validation without theory belongs elsewhere, reinforcing that this work's theoretical guarantees are its distinguishing feature relative to methodological extensions in adjacent branches.

Among the thirty candidates examined across the three contributions, the Annealed Langevin Monte Carlo algorithm contribution has one refutable candidate among its ten, suggesting some overlap with prior annealing-based sampling schemes. The other two contributions (tractable dual guarantees and polynomial-time formal results) were each compared against ten candidates with zero refutations, indicating less direct prior work on these specific theoretical claims. These statistics reflect only the top thirty semantic matches rather than an exhaustive survey, so additional related work may exist beyond this candidate pool.

Within the limits of this search scope, the dual-divergence guarantee and the polynomial-time complexity claims appear relatively novel among the examined candidates, while the annealing algorithm itself has at least one overlapping prior method. The taxonomy context confirms that provable guarantees remain an active but not overcrowded direction, with only four papers in this leaf. Given the top-thirty search boundary, the work's theoretical contributions appear to occupy a distinct niche, though the broader literature may contain additional relevant comparisons.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 1

Research Landscape Overview

Core task: approximate posterior sampling with score-based generative models. This field centers on leveraging learned score functions—gradients of log-densities—to sample from complex posterior distributions, particularly in inverse problems where observations are corrupted or incomplete. The taxonomy reveals four main branches: Theoretical Foundations and Algorithmic Frameworks establish convergence guarantees and provable sampling schemes; Methodological Extensions and Training Strategies develop novel architectures and training procedures to improve score estimation; Inverse Problem Formulations and Measurement Models address diverse observation operators and noise structures; and Application Domains demonstrate practical impact across imaging, communications, and scientific inference.

Representative works such as Diffusion Posterior Sampling[28] and Medical Imaging Score[1] illustrate how score-based priors can be combined with likelihood terms to tackle real-world reconstruction tasks, while foundational studies like Critically Damped Langevin[17] and Neural Score Estimation[40] refine the underlying sampling dynamics. A particularly active line of work focuses on establishing rigorous convergence theory and provable guarantees, ensuring that annealed or iterative sampling schemes reliably approximate true posteriors even under model mismatch or challenging measurement operators. Annealed Langevin[0] sits squarely within this branch, emphasizing theoretical rigor alongside algorithmic design. Nearby efforts such as Provable Probabilistic Imaging[5] and Provably Robust Score[38] similarly pursue formal guarantees, though they may differ in the specific noise models or operator classes they analyze. In contrast, works like Robust Diffusion Posterior[12] prioritize empirical robustness to distributional shifts, while Importance Sampling Score[2] and Importance Weighted Score[35] explore variance reduction through reweighting strategies.

These contrasting emphases—provable convergence versus practical robustness, deterministic annealing schedules versus adaptive importance weighting—highlight ongoing questions about how best to balance theoretical soundness with computational efficiency and real-world applicability.

Claimed Contributions

Tractable approximate posterior sampling with dual guarantees (10 retrieved papers)

The authors establish that their algorithm can efficiently sample from a distribution satisfying two properties: closeness to the posterior of a noised prior in KL divergence (providing global correctness) and closeness to the true posterior in Fisher divergence (ensuring local consistency). This dual guarantee bypasses the known computational hardness of exact posterior sampling.

Annealed Langevin Monte Carlo algorithm for posterior sampling (10 retrieved papers; one can refute)

The authors propose the Annealed Langevin Monte Carlo (ALMC) algorithm, which uses a warm-start phase followed by an annealing phase to track the posteriors of progressively denoised priors. The method provides polynomial-time guarantees under minimal assumptions on the prior and measurement operator.

First formal polynomial-time results for approximate posterior sampling (10 retrieved papers)

The authors claim to provide the first theoretical guarantees showing that approximate posterior sampling can be achieved in polynomial time, addressing a gap in the literature where prior work either lacked formal guarantees or established computational hardness for exact sampling.
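The warm-start-plus-annealing recipe described above can be sketched as follows. This is a minimal illustration under strong simplifying assumptions (a Gaussian prior with known variance and a linear Gaussian measurement y = Ax + noise, so every score is available in closed form); the paper's actual algorithm uses a trained score network, and `almc_sample`, its parameters, and the annealing schedule below are hypothetical choices, not the authors' implementation.

```python
import numpy as np

def almc_sample(y, A, sigma_y, prior_var, noise_levels, n_steps=200, step=1e-2, rng=None):
    """Annealed Langevin Monte Carlo sketch for posterior sampling.

    Warm start: draw the initial iterate from the noised prior at the largest
    noise level. Annealing phase: at each noise level t, run unadjusted
    Langevin dynamics on the score of the noised prior p_t plus the
    likelihood score, then lower t. Here the prior is N(0, prior_var * I),
    so p_t is N(0, (prior_var + t^2) * I) and all scores are closed-form;
    a real implementation would query a trained score network instead.
    Applying the likelihood score directly to the current iterate is a
    common approximation, not an exact posterior score.
    """
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    d = A.shape[1]
    # Warm start from the noised prior at the first (largest) noise level.
    x = np.sqrt(prior_var + noise_levels[0] ** 2) * rng.normal(size=d)
    for t in noise_levels:  # annealing phase: decreasing noise levels
        var_t = prior_var + t ** 2  # variance of the noised prior p_t
        for _ in range(n_steps):
            score_prior = -x / var_t                       # grad log p_t(x)
            score_lik = A.T @ (y - A @ x) / sigma_y ** 2   # grad log p(y|x)
            x = x + step * (score_prior + score_lik) \
                + np.sqrt(2 * step) * rng.normal(size=d)
    return x
```

In this toy setting the exact posterior is Gaussian, so the output of the sketch can be sanity-checked against the closed-form posterior mean; with a learned score network no such check is available, which is exactly why the dual KL/Fisher guarantee matters.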

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Tractable approximate posterior sampling with dual guarantees

Contribution

Annealed Langevin Monte Carlo algorithm for posterior sampling

Contribution

First formal polynomial-time results for approximate posterior sampling