Context Parametrization with Compositional Adapters

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: LLMs, context parametrization, in-context learning, meta-learning, adapter generation, compositionality, efficiency
Abstract:

Large language models (LLMs) can adapt to new tasks through in-context learning (ICL) or supervised fine-tuning (SFT). However, both approaches face key limitations: ICL becomes inefficient as the number of demonstrations grows, and SFT incurs training overhead while sacrificing flexibility.
Mapping instructions or demonstrations from context directly into adapter parameters offers an appealing alternative. While prior work has explored generating adapters from a single input context, it overlooks the need to integrate multiple chunks of information. To address this gap, we introduce CompAs, a meta-learning framework that translates context into adapter parameters with a compositional structure.
Adapters generated this way can be merged algebraically, enabling instructions, demonstrations, or retrieved passages to be seamlessly combined without reprocessing long prompts.
Critically, this approach yields three benefits: lower inference cost, robustness to long-context instability, and a principled solution when input exceeds the model's context window.
Furthermore, CompAs encodes information into adapter parameters in a reversible manner, enabling recovery of the input context through a decoder, which supports safety and security auditing. Empirical results on diverse multiple-choice and extractive question answering tasks show that CompAs outperforms ICL and prior generator-based methods, especially when scaling to more inputs. Our work establishes composable adapter generation as a practical and efficient alternative for scaling LLM deployment.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces CompAs, a meta-learning framework that generates adapter parameters from multiple context chunks and enables algebraic composition of these adapters. According to the taxonomy, this work resides in the 'Compositional Multi-Context Adapter Generation' leaf under 'Context-to-Adapter Generation Methods'. Notably, this leaf contains only the original paper itself—no sibling papers are listed. The parent category 'Context-to-Adapter Generation Methods' contains just one other leaf ('Single-Pass Generative Adapter Synthesis'), suggesting this is a relatively sparse research direction within the broader adapter landscape.

The taxonomy reveals that the broader field encompasses four main branches: context-to-adapter generation, task-aware adapter design, multi-task composition, and specialized applications. The paper's approach bridges context-to-adapter generation with multi-task composition concerns, as it addresses how to integrate multiple information sources without reprocessing. Neighboring work in 'Task-Aware and Context-Oriented Adapter Design' focuses on structural priors and task decomposition, while 'Multi-Task Adapter Composition and Fusion' explores combining pre-trained adapters. CompAs diverges by generating composable adapters on-the-fly rather than fusing pre-existing modules, positioning it at a distinct methodological intersection.

Among the three contributions analyzed, the literature search examined 30 candidate papers total. The core CompAs framework and theoretical composition conditions each examined 10 candidates with zero refutable prior work identified. The reversible encoding contribution examined 10 candidates and found 1 that appears to provide overlapping prior work. This suggests the compositional generation mechanism and theoretical foundations represent relatively unexplored territory within the limited search scope, while the reversibility aspect has at least some precedent. The analysis explicitly notes this is based on top-K semantic search plus citation expansion, not exhaustive coverage.

Given the limited search scope of 30 candidates and the sparse taxonomy leaf containing only this paper, the work appears to occupy a relatively novel position within context-driven adapter generation. However, the single-paper leaf status and modest search scale mean substantial related work may exist outside the examined candidates. The reversibility finding indicates at least one dimension has prior exploration, warranting careful positioning against that specific precedent.

Taxonomy

10 Core-task Taxonomy Papers
3 Claimed Contributions
30 Contribution Candidate Papers Compared
1 Refutable Paper

Research Landscape Overview

Core task: generating compositional adapter parameters from context for language models. The field centers on making language models more flexible by dynamically producing or combining adapter modules based on input context, rather than relying solely on fixed, pre-trained adapters.

The taxonomy reveals four main branches. Context-to-Adapter Generation Methods explore techniques that directly map contextual signals into adapter weights, enabling on-the-fly parameterization. Task-Aware and Context-Oriented Adapter Design focuses on crafting adapters that are sensitive to task-specific or contextual cues, often leveraging structural priors or domain knowledge. Multi-Task Adapter Composition and Fusion investigates how to merge or orchestrate multiple adapters (such as AdapterFusion[4]) to handle diverse tasks simultaneously. Finally, Specialized Adapter Applications and Architectures address domain-specific use cases and novel architectural variants, demonstrating the breadth of adapter-based approaches across different problem settings.

Recent work has intensified around compositional and generative strategies. Several studies, including Corda[2], CorDA[3], and Generative Adapter[5], emphasize learning to produce adapter parameters conditioned on input or task context, trading off expressiveness against computational overhead. Compositional Adapters[0] sits squarely within this line of inquiry, focusing on multi-context scenarios where adapters must be generated and composed dynamically. It shares conceptual ground with CorDA[3], which also targets context-driven adapter generation, but places stronger emphasis on handling multiple contextual signals simultaneously. Meanwhile, approaches like Structural Priors Adapters[1] and Caila[7] explore how to inject inductive biases or task-aware structures into adapter design, offering a complementary perspective on context sensitivity.

Together, these directions highlight ongoing questions about scalability, interpretability, and the optimal granularity of context-driven parameterization.

Claimed Contributions

COMPAS meta-learning framework for compositional adapter generation

The authors propose COMPAS, a teacher-student framework that maps contextual information (instructions, demonstrations, or retrieved passages) into adapter parameters that can be algebraically merged. This enables seamless combination of multiple information sources without reprocessing long prompts, addressing efficiency and long-context instability issues.
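The pipeline described above can be illustrated with a minimal sketch. All names here are hypothetical, and the hash-based "generator" merely stands in for CompAs's learned hypernetwork; only the shape of the pipeline (per-chunk generation, then parameter-wise addition) reflects the claimed contribution:

```python
# Toy sketch of a CompAs-style pipeline (hypothetical names): a generator
# maps each context chunk to a flat adapter delta, and deltas are merged
# by elementwise addition instead of concatenating chunks into one prompt.

def generate_adapter(context_chunk, dim=4):
    """Stand-in generator: hash a chunk into a small weight delta."""
    seed = sum(ord(ch) for ch in context_chunk)
    # Deterministic pseudo-weights in place of a learned hypernetwork.
    return [((seed * (i + 1)) % 7 - 3) * 0.01 for i in range(dim)]

def merge_adapters(adapters):
    """Algebraic composition: parameter-wise sum of generated deltas."""
    return [sum(vals) for vals in zip(*adapters)]

chunks = ["Follow the style guide.", "Example Q: 2+2? A: 4."]
deltas = [generate_adapter(c) for c in chunks]
merged = merge_adapters(deltas)
# `merged` plays the role of an adapter conditioned on both chunks,
# obtained without reprocessing a long concatenated prompt.
```

Because composition is a plain sum, new chunks can be folded in incrementally, which is what makes the approach attractive for retrieval settings where context arrives piecewise.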

10 retrieved papers
Theoretical conditions for parameter-space composition

The authors formalize compositionality requirements through a monoid homomorphism framework and prove a compositionality bound (Theorem 1) that decomposes student-teacher error into generator additivity error and misfit on concatenated contexts, providing theoretical guarantees for when adapter addition approximates context concatenation.
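The decomposition described above can be sketched via a triangle inequality. The notation below is a plausible reconstruction, not the paper's exact statement: $G$ is the generator (an approximate monoid homomorphism from contexts under concatenation $\oplus$ to parameters under addition), $f_\theta$ is the student model with adapter $\theta$, and $T(c)$ is the teacher's output given context $c$:

```latex
\[
\underbrace{\bigl\| f_{G(c_1) + G(c_2)} - T(c_1 \oplus c_2) \bigr\|}_{\text{student--teacher error}}
\;\le\;
\underbrace{\bigl\| f_{G(c_1) + G(c_2)} - f_{G(c_1 \oplus c_2)} \bigr\|}_{\text{generator additivity error}}
\;+\;
\underbrace{\bigl\| f_{G(c_1 \oplus c_2)} - T(c_1 \oplus c_2) \bigr\|}_{\text{misfit on concatenated contexts}}
\]
```

When the generator is nearly additive ($G(c_1 \oplus c_2) \approx G(c_1) + G(c_2)$) and the generated adapter fits the teacher well on concatenated contexts, adding adapters approximates concatenating contexts.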

10 retrieved papers
Reversible context encoding with reconstruction capability

The framework includes a reconstruction objective that allows the model to decode and recover the original input context from adapter parameters, providing a mechanism for verifying what information has been encoded and supporting safety and security requirements.
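The reversibility idea can be illustrated with a deliberately lossless toy encoding. This is not the paper's architecture (which trains a decoder against a reconstruction objective); it only shows why exact recovery makes the encoded content auditable:

```python
# Toy sketch of reversible context encoding (hypothetical): tokens are
# packed losslessly into a flat "parameter" vector, and a decoder inverts
# the packing exactly, so the encoded context can be audited afterwards.

def encode_context(context):
    """Stand-in for the adapter generator: store scaled code points."""
    return [ord(ch) / 128.0 for ch in context]

def decode_context(params):
    """Stand-in for the reconstruction decoder: invert the scaling."""
    return "".join(chr(round(p * 128.0)) for p in params)

context = "Answer in French."
params = encode_context(context)
recovered = decode_context(params)
assert recovered == context  # exact recovery enables safety auditing
```

In the learned setting recovery is approximate rather than exact, but the same property holds: inspecting the decoder's output reveals what information the generated adapter actually carries.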

10 retrieved papers
Can Refute

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current top-K core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, which is a partial signal of novelty, though one constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution
