Multifidelity Simulation-based Inference for Computationally Expensive Simulators

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: simulation-based inference, likelihood-free inference, Bayesian inference, transfer learning, multifidelity, neuroscience
Abstract:

Across many domains of science, stochastic models are an essential tool to understand the mechanisms underlying empirically observed data. Models can be of different levels of detail and accuracy, with high-fidelity models (i.e., those that represent the phenomena under study with high accuracy) often being preferable. However, inferring parameters of high-fidelity models via simulation-based inference is challenging, especially when the simulator is computationally expensive. We introduce MF-(TS)NPE, a multifidelity approach to neural posterior estimation that uses transfer learning to leverage inexpensive low-fidelity simulations to efficiently infer parameters of high-fidelity simulators. MF-(TS)NPE applies the multifidelity scheme to both amortized and non-amortized neural posterior estimation. We further improve simulation efficiency by introducing A-MF-TSNPE, a sequential variant that uses an acquisition function targeting the predictive uncertainty of the density estimator to adaptively select high-fidelity parameters. On established benchmark and neuroscience tasks, our approaches require up to two orders of magnitude fewer high-fidelity simulations than current methods, while showing comparable performance. Overall, our approaches open new opportunities to perform efficient Bayesian inference on computationally expensive simulators.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces MF-(TS)NPE, a multifidelity approach to neural posterior estimation that uses transfer learning to leverage low-fidelity simulations for efficient parameter inference in high-fidelity simulators. It resides in the 'Neural Simulation-Based Inference with Multifidelity' leaf, which contains only three papers total, indicating a relatively sparse and emerging research direction. This leaf sits within the broader 'Multifidelity Bayesian Inference' branch, which encompasses approximately twelve papers across five distinct methodological clusters, suggesting the neural simulation-based approach represents a minority but growing subfield.

The taxonomy reveals that neighboring leaves pursue alternative inference strategies: 'Gaussian Process-Based Multifidelity Inference' contains four papers using co-kriging and GP surrogates, while 'Multifidelity MCMC and Delayed Acceptance' explores sampling-based methods. The 'Multifidelity Surrogate Modeling' branch (fourteen papers across four leaves) focuses on emulator construction without explicit inference loops, representing a complementary but distinct research direction. The paper's neural density estimation approach diverges from these GP-centric and MCMC-based methods, positioning it at the intersection of modern deep learning and classical multifidelity modeling.

Across the thirty candidates examined, the contribution-level analysis reveals mixed novelty signals. For the core MF-(TS)NPE framework, ten candidates were examined and one refutable match was found, suggesting some prior work addresses multifidelity neural posterior estimation. The sequential acquisition-function variant (MF-TSNPE-AF) likewise yielded one refutable candidate among ten examined. By contrast, the empirical analysis of transfer-learning effectiveness in multifidelity simulation-based inference yielded zero refutations among ten candidates, indicating this specific investigation may represent a less-explored angle within the limited search scope.

Given the sparse three-paper leaf and the limited thirty-candidate search, the work appears to occupy an emerging niche where neural simulation-based inference meets multifidelity modeling. The presence of refutable candidates for two contributions suggests the core technical ideas have some precedent, though the scale of prior work remains unclear beyond the top-thirty semantic matches examined. The taxonomy structure indicates this neural approach is less established than GP-based or MCMC alternatives in the multifidelity inference landscape.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 2

Research Landscape Overview

Core task: Bayesian inference for computationally expensive simulators using multifidelity models. This field addresses the challenge of performing rigorous statistical inference when high-fidelity simulations are prohibitively costly, by strategically combining information from multiple model resolutions or approximations.

The taxonomy organizes the landscape into four main branches. Multifidelity Bayesian Optimization focuses on efficiently searching design or parameter spaces by querying cheaper low-fidelity models more frequently while reserving expensive high-fidelity evaluations for promising regions, as exemplified by works like Multifidelity Bayesian Optimization Review[2] and Constrained Multiobjective Multifidelity[6]. Multifidelity Bayesian Inference emphasizes posterior estimation and uncertainty quantification, often employing neural or simulation-based techniques to fuse information across fidelities. Multifidelity Surrogate Modeling develops emulators that learn discrepancies or correlations between fidelity levels, enabling fast approximations for downstream tasks such as optimization or sensitivity analysis; representative approaches include Diffusion Surrogate Modeling[3] and Bayesian Neural Multifidelity[5]. Finally, Methodological Foundations and Transfer Learning explores theoretical underpinnings and strategies for transferring knowledge across related simulation settings, as seen in Cosmology Transfer Learning[41] and Local Transfer Learning[24].

Across these branches, a central trade-off concerns how aggressively to exploit cheap approximations versus when to invest in costly high-fidelity runs, with many studies exploring adaptive sampling and hierarchical modeling strategies. Within the Multifidelity Bayesian Inference branch, a particularly active line of work leverages neural simulation-based inference to handle complex, implicit likelihoods.
Multifidelity Simulation Inference[0] sits squarely in this neural inference cluster, sharing methodological kinship with Multilevel Neural Inference[14], which also targets scalable posterior approximation by combining neural density estimators with hierarchical fidelity structures. Compared to Cosmology Transfer Learning[41], which emphasizes domain adaptation for specific scientific applications, Multifidelity Simulation Inference[0] focuses more broadly on the algorithmic machinery for fusing neural surrogates across fidelities. This positioning highlights an emerging emphasis on flexible, data-driven inference frameworks that can accommodate diverse simulator types without requiring explicit likelihood functions.

Claimed Contributions

MF-(TS)NPE: Multifidelity Neural Posterior Estimation with Transfer Learning

The authors propose a multifidelity simulation-based inference method that pre-trains a neural density estimator on low-fidelity simulations and then fine-tunes it on a smaller set of high-fidelity simulations. This approach applies to both amortized (NPE) and non-amortized (TSNPE) neural posterior estimation, reducing the number of required high-fidelity simulations by up to two orders of magnitude.

10 retrieved papers · can refute (one refutable candidate found)
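The two-stage scheme described above (pre-train on many cheap low-fidelity simulations, then fine-tune on a small high-fidelity budget) can be illustrated with a deliberately minimal sketch. A conditional Gaussian estimator q(theta | x) = N(a*x + b, s^2) trained by finite-difference gradient descent stands in for the neural density estimator, and simulate_lf / simulate_hf are invented one-dimensional toy simulators, not the paper's benchmarks.

```python
import math
import random

def simulate_lf(theta, rng):
    # cheap low-fidelity simulator (toy assumption): biased and noisy
    return theta + 0.3 + rng.gauss(0.0, 0.5)

def simulate_hf(theta, rng):
    # expensive high-fidelity simulator (toy assumption): unbiased, less noisy
    return theta + rng.gauss(0.0, 0.2)

def nll(params, data):
    # average negative log-likelihood of q(theta | x) = N(a*x + b, s^2),
    # with additive constants dropped
    a, b, log_s = params
    s = math.exp(log_s)
    total = 0.0
    for theta, x in data:
        resid = theta - (a * x + b)
        total += 0.5 * (resid / s) ** 2 + log_s
    return total / len(data)

def train(params, data, lr, steps, eps=1e-4):
    # plain gradient descent with finite-difference gradients
    params = list(params)
    for _ in range(steps):
        base = nll(params, data)
        grads = []
        for i in range(len(params)):
            shifted = list(params)
            shifted[i] += eps
            grads.append((nll(shifted, data) - base) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

rng = random.Random(0)

# Stage 1: pre-train on many cheap low-fidelity simulations.
lf_thetas = [rng.gauss(0.0, 1.0) for _ in range(2000)]
lf_data = [(t, simulate_lf(t, rng)) for t in lf_thetas]
params = train([0.0, 0.0, 0.0], lf_data, lr=0.05, steps=400)

# Stage 2: fine-tune the same parameters on a small high-fidelity budget.
hf_thetas = [rng.gauss(0.0, 1.0) for _ in range(50)]
hf_data = [(t, simulate_hf(t, rng)) for t in hf_thetas]
params = train(params, hf_data, lr=0.02, steps=400)

a, b, log_s = params
print(f"q(theta | x): mean = {a:.2f}*x + {b:.2f}, std = {math.exp(log_s):.2f}")
```

Under this toy setup the low-fidelity pre-training supplies a warm start (biased toward the low-fidelity posterior), which the 50 high-fidelity pairs then refine toward the high-fidelity posterior while the predictive standard deviation shrinks, mirroring the pre-train/fine-tune logic of the contribution.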
MF-TSNPE-AF: Sequential Variant with Acquisition Function

The authors develop a sequential extension of their multifidelity method that incorporates an acquisition function based on epistemic uncertainty. This active learning strategy adaptively selects which high-fidelity parameters to simulate, further enhancing simulation efficiency for non-amortized posterior estimation.

10 retrieved papers · can refute (one refutable candidate found)
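The acquisition idea above (spend high-fidelity simulations where the estimator is most uncertain) can be sketched with a hedged toy: an ensemble of bootstrap linear regressors stands in for an ensemble of density estimators, and disagreement across members serves as the epistemic-uncertainty score. The names and data here (fit_linear, acquire, hf_data) are invented for illustration.

```python
import random
import statistics

def fit_linear(data):
    # ordinary least-squares fit of y = a*x + b (closed form)
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    var = sum((x - mx) ** 2 for x, _ in data)
    if var == 0.0:  # degenerate bootstrap resample: fall back to a flat model
        return 0.0, my
    a = sum((x - mx) * (y - my) for x, y in data) / var
    return a, my - a * mx

def acquire(candidates, data, k=3, members=10, seed=0):
    # train an ensemble on bootstrap resamples of the high-fidelity data,
    # score each candidate parameter by ensemble disagreement, keep the top k
    rng = random.Random(seed)
    models = [fit_linear([data[rng.randrange(len(data))] for _ in data])
              for _ in range(members)]

    def epistemic_score(theta):
        preds = [a * theta + b for a, b in models]
        return statistics.pstdev(preds)  # disagreement across members

    return sorted(candidates, key=epistemic_score, reverse=True)[:k]

rng = random.Random(1)
# a handful of high-fidelity pairs clustered near theta = 0, so the
# ensemble should disagree most far away from the observed data
hf_data = [(t, 2.0 * t + rng.gauss(0.0, 0.1))
           for t in [-0.2, -0.1, -0.05, 0.0, 0.05, 0.1, 0.15, 0.2]]
picked = acquire([-3.0, -1.0, 0.0, 1.0, 3.0], hf_data)
print(picked)
```

Because the training data sit near zero, the extreme candidates (theta = ±3) get the highest disagreement scores and are selected for the next round of expensive simulations, which is the qualitative behavior the sequential variant relies on.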
Empirical Analysis of Transfer Learning Effectiveness in Multifidelity SBI

The authors investigate when pre-training on low-fidelity simulations helps transfer learning by conducting controlled experiments. They demonstrate that effectiveness depends on both mutual information between low- and high-fidelity simulators and representational coherence, providing empirical insights into the conditions under which multifidelity approaches succeed.

10 retrieved papers · no refutable candidate found
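The mutual-information condition mentioned above can be probed with a crude, hedged diagnostic: if the two simulators' outputs (for shared parameters) are roughly jointly Gaussian, their mutual information reduces to the closed form I(X; Y) = -0.5 * log(1 - rho^2). The simulators below are invented one-dimensional toys, not the paper's experiments.

```python
import math
import random

def gaussian_mi(xs, ys):
    # mutual information under a bivariate-Gaussian assumption:
    # I(X; Y) = -0.5 * log(1 - rho^2), with rho the sample correlation
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    rho = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n * sx * sy)
    return -0.5 * math.log(1.0 - rho ** 2)

rng = random.Random(0)
thetas = [rng.gauss(0.0, 1.0) for _ in range(5000)]

# toy simulators (assumptions, not the paper's tasks): the informative
# low-fidelity model shares the parameter with the high-fidelity one up to
# bias and noise, while the uninformative one ignores the parameter entirely
hf = [t + rng.gauss(0.0, 0.2) for t in thetas]
lf_good = [t + 0.3 + rng.gauss(0.0, 0.5) for t in thetas]
lf_bad = [rng.gauss(0.0, 1.0) for _ in thetas]

mi_good = gaussian_mi(hf, lf_good)
mi_bad = gaussian_mi(hf, lf_bad)
print(f"MI(HF, informative LF) = {mi_good:.3f}, MI(HF, uninformative LF) = {mi_bad:.4f}")
```

A high score for the informative pair and a near-zero score for the independent pair matches the intuition in the contribution: pre-training can only help when the low-fidelity outputs actually carry information about the high-fidelity ones.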

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

MF-(TS)NPE: Multifidelity Neural Posterior Estimation with Transfer Learning

Contribution

MF-TSNPE-AF: Sequential Variant with Acquisition Function

Contribution

Empirical Analysis of Transfer Learning Effectiveness in Multifidelity SBI