Iterative Amortized Inference: Unifying In-Context Learning and Learned Optimizers

ICLR 2026 Conference Submission
Anonymous Authors

Keywords: amortization, in-context learning, meta-learning, learned optimizers, stochastic optimization
Abstract:

Modern learning systems increasingly rely on amortized learning: reusing computation or inductive biases shared across tasks to enable rapid generalization to novel problems. This principle spans a range of approaches, including meta-learning, in-context learning, prompt tuning, and learned optimizers. While motivated by similar goals, these approaches differ in how they encode and leverage task-specific information, often provided as in-context examples. In this work, we propose a unified framework that describes how such methods differ primarily in which aspects of learning they amortize (such as initializations, learned updates, or predictive mappings) and how they incorporate task data at inference. We introduce a taxonomy that categorizes amortized models into parametric, implicit, and explicit regimes, based on whether task adaptation is externalized, internalized, or jointly modeled. Building on this view, we identify a key limitation of current approaches: most struggle to scale to large datasets because their capacity to process task data at inference (e.g., context length) is limited. To address this, we propose iterative amortized inference, a class of models that refine solutions step by step over mini-batches, drawing inspiration from stochastic optimization. Our formulation bridges optimization-based meta-learning with forward-pass amortization in models like LLMs, offering a scalable and extensible foundation for general-purpose task adaptation.
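The iterative scheme sketched in the abstract (a solution initialized once, then refined over mini-batches by a learned update) can be illustrated in a few lines. The sketch below is our own illustrative reading, not the paper's actual formulation: `learned_update` uses a plain least-squares gradient step as a stand-in for a trained update network, and the toy linear task and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def learned_update(solution, batch_x, batch_y):
    # Stand-in for a trained update network: a plain least-squares gradient
    # step, so the refinement loop mirrors SGD. In the paper's setting this
    # mapping would itself be amortized (learned) across a task distribution.
    grad = batch_x.T @ (batch_x @ solution - batch_y) / len(batch_y)
    return solution - 0.1 * grad

# Toy task: recover the weights of a linear model from streamed mini-batches,
# so no single "context window" ever has to hold the full dataset.
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(256, 2))
y = X @ true_w

solution = np.zeros(2)            # amortized initialization (here: zeros)
for _ in range(200):              # iterative refinement over mini-batches
    idx = rng.integers(0, len(X), size=32)
    solution = learned_update(solution, X[idx], y[idx])
```

Replacing the hand-written gradient step with a learned mapping recovers a learned-optimizer view of the loop; collapsing the loop into a single forward pass over the full context recovers in-context learning.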

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes a paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND ITS JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes a unified framework for amortized learning methods, introducing a taxonomy that categorizes approaches into parametric, implicit, and explicit regimes based on how they externalize or internalize task adaptation. It resides in the 'Unified Frameworks for Amortized Learning' leaf alongside two sibling papers, making this a relatively sparse research direction within the broader taxonomy. The framework aims to clarify how methods like meta-learning, in-context learning, and learned optimizers differ in what aspects of learning they amortize and how they incorporate task data at inference time.

The taxonomy tree shows this leaf sits within 'Amortized Inference Frameworks and Theoretical Foundations', neighboring leaves on Bayesian inference methods and cognitive perspectives. Related branches include meta-learning approaches that focus on initialization strategies and domain-specific adaptations for vision or robotics. The scope note explicitly excludes application-specific methods, positioning this work as foundational rather than domain-specialized. Nearby work explores Bayesian posterior estimation and experimental design, suggesting the paper's framework must distinguish itself from purely probabilistic formulations while connecting to meta-learning paradigms that share similar rapid-adaptation goals.

Among the twenty-four candidates examined across the three contributions, no clearly refuting prior work was identified: ten candidates were examined for the unified-framework contribution, five for the taxonomy of regimes, and nine for the iterative inference proposal, with no refutable matches in any group. This suggests that, within the limited search scope (primarily top-K semantic matches and citation expansion), the specific combination of a unifying taxonomy with iterative amortized inference is distinct from existing formulations, though the search does not cover the entire literature on amortization or meta-learning.

Based on the limited search scope of twenty-four semantically similar papers, the work appears to occupy a relatively uncrowded position within unified frameworks for amortization. The taxonomy structure indicates this is a sparse leaf with only three total papers, and the contribution-level analysis found no overlapping prior work among examined candidates. However, the analysis does not exhaustively cover all meta-learning or amortization literature, leaving open the possibility of related frameworks in adjacent research communities.

Taxonomy

Core-task Taxonomy Papers: 42
Claimed Contributions: 3
Contribution Candidate Papers Compared: 24
Refutable Papers: 0

Research Landscape Overview

Core task: amortized learning for rapid task adaptation. The field centers on training models that can quickly adapt to new tasks by amortizing the cost of inference or optimization across a distribution of related problems. The taxonomy reveals several complementary research directions: foundational work on amortized inference frameworks establishes theoretical principles and unified architectures for learning to infer efficiently, while meta-learning and few-shot adaptation methods focus on learning initialization strategies or update rules that enable rapid fine-tuning from limited data. Domain-specific branches apply these ideas to particular settings such as medical imaging, robotics skill transfer, or Bayesian experimental design, often tailoring amortization schemes to exploit structure in those domains. Meanwhile, efficient training and inference optimization addresses computational bottlenecks through techniques like neural architecture search or hardware-aware prediction, and transfer learning explores how to reuse knowledge across task families. Representative works like Neural Methods for Amortized[33] and Meta-Learning Probabilistic Inference For[40] illustrate how amortized approaches can replace costly iterative procedures with learned feed-forward mappings.

Recent efforts reveal a tension between generality and specialization: some lines pursue broadly applicable amortized inference engines that handle diverse probabilistic models, as seen in Amortized Bayesian Meta-Learning with[8] and Amortized bayesian workflow[6], while others optimize for specific task structures to achieve superior speed or accuracy in narrow domains. Iterative Amortized Inference[0] sits within the unified frameworks cluster, emphasizing principled iterative refinement schemes that balance amortization with test-time adaptation. This contrasts with purely feed-forward amortizers like Amortized Inference for Efficient[3], which prioritize single-pass speed, and with meta-learning approaches such as Fast Task Inference with[17] that focus on learning task representations for downstream adaptation. The interplay between these strategies, whether to amortize inference completely, retain some iterative flexibility, or meta-learn adaptation procedures, remains an active area of exploration, with trade-offs in computational cost, sample efficiency, and generalization across task distributions.

Claimed Contributions

Unified framework for amortized learning methods

The authors introduce a general formulation that unifies meta-learning, in-context learning, prompt tuning, and learned optimizers under a common mathematical framework (Equation 5), showing how these methods differ in which components they learn and how they process task data.

10 retrieved papers

Taxonomy of amortization regimes

The authors propose a categorization scheme that classifies amortized learning approaches into three distinct regimes based on how they encode inductive biases and perform task adaptation, distinguishing methods by their treatment of task-specific versus task-invariant information.

5 retrieved papers

Iterative amortized inference framework

The authors introduce a scalable approach that addresses limitations in processing large task datasets by iteratively refining solutions through mini-batch updates, bridging optimization-based meta-learning with forward-pass amortization and enabling models to scale beyond context length constraints.

9 retrieved papers
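The three regimes named in the taxonomy contribution above (parametric, implicit, explicit) can be illustrated on a toy one-dimensional regression task. This is a hedged sketch under our own reading of the report: the function names and the closed-form "models" below are illustrative stand-ins, not the paper's definitions.

```python
# Toy illustration of the three amortization regimes on a 1-D linear task
# (y = w * x). All names and mechanisms here are illustrative assumptions.

def predict(w, x):
    return w * x

# Parametric regime: task adaptation is externalized as explicit parameter
# updates applied to a shared initialization (meta-learning style).
def parametric_adapt(w0, task_data, lr=0.1, steps=50):
    w = w0
    for _ in range(steps):
        for x, y in task_data:
            w -= lr * (predict(w, x) - y) * x   # per-example gradient step
    return lambda x: predict(w, x)

# Implicit regime: adaptation is internalized in a single forward pass that
# conditions on the context (in-context-learning style). A closed-form
# least-squares map stands in for a trained sequence model.
def implicit_adapt(task_data):
    num = sum(x * y for x, y in task_data)
    den = sum(x * x for x, _ in task_data)
    return lambda x: (num / den) * x

# Explicit regime: the task variable is jointly modeled; here the posterior
# mean of w under a Gaussian prior (a ridge solution) stands in for an
# amortized posterior network.
def explicit_adapt(task_data, prior_precision=1.0):
    num = sum(x * y for x, y in task_data)
    den = prior_precision + sum(x * x for x, _ in task_data)
    return lambda x: (num / den) * x

task = [(1.0, 3.0), (2.0, 6.0)]   # observations from y = 3x
f_parametric = parametric_adapt(0.0, task)
f_implicit = implicit_adapt(task)
f_explicit = explicit_adapt(task)
```

Note the design contrast this surfaces: the explicit regime shrinks its estimate toward the prior (a slope of 2.5 rather than 3 in this toy example), reflecting that the task variable is modeled jointly with uncertainty rather than point-estimated as in the other two regimes.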

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution: Unified framework for amortized learning methods

Contribution: Taxonomy of amortization regimes

Contribution: Iterative amortized inference framework