Native Adaptive Solution Expansion for Diffusion-based Combinatorial Optimization

ICLR 2026 Conference Submission, Anonymous Authors
Keywords: masked diffusion model, neural combinatorial optimization
Abstract:

One central challenge in Neural Combinatorial Optimization (NCO) is handling hard constraints efficiently. Beyond the two classic paradigms, i.e., Local Construction (LC), which sequentially builds feasible solutions but scales poorly, and Global Prediction (GP), which produces one-shot heatmaps yet struggles with constraint conflicts, the recently proposed Adaptive Expansion (AE) shares the advantages of both by progressively growing partial solutions with instance-wise global awareness. However, existing realizations bolt AE onto external GP predictors, so their solution quality is bounded by the backbone and their inference cost scales with repeated global calls. In this paper, we fundamentally rethink adaptive expansion and make it native to a generative model, acting as its intrinsic decoding principle rather than an external wrapper. We propose NEXCO, a CO-specific masked diffusion framework that turns adaptive expansion into the model’s own iterative unmasking process. Specifically, it involves a solution-expansion training procedure with a time-agnostic GNN denoiser, which learns diffusion trajectories between fully masked solutions and ground-truth solutions. With the trained time-agnostic denoiser, we introduce a novel solution expansion scheme at the solving stage, enabling adaptive control over the intermediate solution states. It is achieved by constructing candidate sets according to confidence scores and applying feasibility projection to expand the solution while respecting constraints. In this way, "adaptive" is not an afterthought but the decoding itself: intermediate diffusion states are meaningful partial solutions and progress is instance-adaptive rather than schedule-bound. Extensive experiments on representative CO problems show that NEXCO achieves approximately 50% improvement in solution quality and up to 4× faster inference compared to prior state-of-the-art solvers.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes NEXCO, a masked diffusion framework that integrates adaptive expansion directly into the generative model's iterative unmasking process rather than relying on external global predictors. It sits within the Adaptive Expansion Methods leaf, which contains only four papers including this one. This is a relatively sparse research direction compared to more crowded areas like Vehicle Routing Problems (five papers) or Scheduling Problems (seven papers), suggesting that adaptive expansion as a paradigm is still emerging and less explored than traditional sequential or one-shot approaches.

The taxonomy tree reveals that Adaptive Expansion Methods is one of three construction paradigms alongside Local Construction Methods (five papers) and Global Prediction Methods (three papers). Neighboring branches include Solution Improvement and Refinement, which focuses on post-construction enhancement rather than initial building strategies, and Representation and Encoding, which addresses how problem structure is embedded. The scope note for Adaptive Expansion explicitly excludes purely sequential construction without global awareness and one-shot prediction, positioning this work at the intersection of progressive building and instance-aware guidance. The sibling papers in this leaf explore related themes: partial solution strategies, inference-time adaptation, and progressive construction, but the taxonomy structure suggests these approaches remain distinct from the diffusion-based formulation proposed here.

Among the thirteen candidates examined through limited semantic search, none clearly refute any of the three contributions. Contribution A (Native Adaptive Expansion paradigm) examined ten candidates with zero refutable matches, Contribution B (CO-specific masked diffusion with time-agnostic denoiser) examined two candidates with zero refutations, and Contribution C (solution expansion inference with feasibility projection) examined one candidate with zero refutations. This suggests that within the limited search scope, the specific combination of diffusion-based generative modeling and native adaptive expansion appears relatively unexplored. However, the small candidate pool (thirteen total) means the analysis cannot rule out relevant prior work beyond the top semantic matches examined.

Based on the limited literature search covering thirteen candidates, the work appears to occupy a novel position by making adaptive expansion intrinsic to a diffusion model rather than an external wrapper. The sparse population of the Adaptive Expansion Methods leaf and the absence of refuting candidates within the examined scope suggest potential novelty, though the analysis acknowledges it does not cover the full breadth of diffusion-based combinatorial optimization or masked generative modeling literature beyond the top semantic matches.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 13
Refutable Papers: 0

Research Landscape Overview

Core task: neural combinatorial optimization with adaptive solution expansion. The field has evolved into a rich ecosystem of approaches that can be organized around several major themes. Solution Construction Paradigms explore how neural models build solutions incrementally or in parallel, with Adaptive Expansion Methods focusing on dynamically growing partial solutions based on learned policies. Solution Improvement and Refinement branches emphasize post-construction enhancement through local search or iterative refinement, while Problem-Specific Applications target domains such as vehicle routing, scheduling, and assembly planning. Unified and Cross-Problem Frameworks seek architectures that generalize across multiple combinatorial tasks, and Representation and Encoding branches investigate how to embed problem structure into neural inputs. Training and Optimization Strategies address learning objectives and sample efficiency, Constraint Handling ensures feasibility, and Solution Diversity and Exploration tackle the balance between exploitation and broad search.

Representative works span from early surveys like Machine Learning Combinatorial[8] and Graph Learning Survey[3] to recent scalable methods such as Scaling Neural Heuristics[2] and UniCO[4], illustrating the field's maturation from foundational concepts to large-scale deployment. Recent activity highlights contrasts between construction-focused and improvement-focused paradigms, with many studies exploring hybrid strategies that combine both.

Within the Adaptive Expansion Methods branch, Native Adaptive Solution[0] exemplifies approaches that learn to expand partial solutions step-by-step, adjusting the construction process based on intermediate states. This contrasts with methods like Partial Solution Strategies[43], which emphasize starting from incomplete configurations and filling gaps, and Inference Time Adaptation[48], which tunes policies during deployment rather than purely at training time. Native Adaptive Solution[0] shares the incremental construction philosophy with Progressive Distillation[28] but distinguishes itself by focusing on adaptive decision-making at each expansion step rather than distilling fixed heuristics. Meanwhile, works such as Boosting Large VRP[5] and Neural Real-World Routing[1] push the boundaries of scale and real-world applicability, raising open questions about how adaptive expansion strategies can maintain solution quality and computational efficiency as problem sizes grow. The landscape reflects ongoing exploration of when to construct, when to refine, and how to balance learned flexibility with hard constraints.

Claimed Contributions

Contribution A: Native Adaptive Expansion (NAE) paradigm for diffusion-based CO

The authors introduce a framework that embeds adaptive expansion directly into the diffusion process as an intrinsic decoding principle, rather than as an external wrapper around global predictors. This makes intermediate diffusion states meaningful partial solutions with instance-adaptive progress.

10 retrieved papers
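To make the claimed decoding principle concrete, the following Python sketch shows one way an unmasking loop of this kind could look. This is an illustrative reconstruction, not the paper's implementation: the denoiser is abstracted as a `confidence_fn` callable, the feasibility check as `is_feasible`, and the `MASK` token and one-variable-per-step commit rule are assumptions.

```python
import numpy as np

MASK = -1  # hypothetical token for still-undecided variables

def adaptive_expand(confidence_fn, n_vars, is_feasible, max_steps=64):
    """Sketch of native adaptive expansion as iterative unmasking.

    confidence_fn: maps the current partial state to per-variable
        selection probabilities (stands in for the GNN denoiser).
    is_feasible: problem-specific check that committing variable i
        to `value` keeps the partial solution feasible.
    """
    state = np.full(n_vars, MASK)
    for _ in range(max_steps):
        undecided = np.where(state == MASK)[0]
        if undecided.size == 0:
            break  # solution is complete
        probs = confidence_fn(state)
        # Candidate set: undecided variables ranked by confidence.
        order = undecided[np.argsort(-probs[undecided])]
        committed = False
        for i in order:
            value = int(probs[i] > 0.5)
            if is_feasible(state, i, value):
                state[i] = value  # the "unmasking" step
                committed = True
                break
        if not committed:
            break  # no feasible commitment remains
    return state
```

Because the loop terminates as soon as no undecided variables remain, the number of denoiser calls adapts to the instance rather than following a fixed schedule, which is the property the contribution emphasizes.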
Contribution B: CO-specific masked diffusion with time-agnostic denoiser

The framework uses a corruption process that masks only selected variables (1s) while preserving unselected ones (0s), coupled with a time-agnostic graph neural network denoiser trained under optimization consistency. This ensures intermediate states remain valid partial solutions aligned with combinatorial feasibility.

2 retrieved papers
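The asymmetric corruption described above (masking only selected variables, preserving unselected ones) can be sketched as follows. The `MASK` token and `mask_ratio` parameter are hypothetical stand-ins for whatever the paper actually uses; the point is that every corrupted state is a subset of the ground-truth selection, i.e., a valid partial solution.

```python
import numpy as np

MASK = -1  # hypothetical mask token

def corrupt(solution, mask_ratio, rng):
    """Sketch of the CO-specific corruption: hide a random fraction
    of the selected variables (1s); unselected variables (0s) are
    never masked, so the corrupted state stays a partial solution."""
    x = np.asarray(solution).copy()
    ones = np.where(x == 1)[0]
    n_mask = int(round(mask_ratio * ones.size))
    hidden = rng.choice(ones, size=n_mask, replace=False)
    x[hidden] = MASK
    return x
```

Training a denoiser on such states for many masking ratios at once is what would allow it to be time-agnostic: the input itself encodes how far the expansion has progressed.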
Contribution C: Solution expansion inference scheme with feasibility projection

At inference time, the method progressively expands partial solutions by selecting high-confidence variables and enforcing problem-specific constraints through feasibility projection. This achieves O(Ts) complexity compared to O(Ds·Ts) for wrapper-based approaches while maintaining solution quality.

1 retrieved paper
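As a minimal sketch of what confidence-ordered feasibility projection could look like, consider a maximum-independent-set style constraint (one possible instantiation; the paper's projection is problem-specific and may differ). Candidates are accepted in confidence order and rejected if they conflict with the current partial solution.

```python
def project_feasible(candidates, partial, adj):
    """Hypothetical feasibility projection for an independent-set
    constraint: greedily accept candidate nodes in descending
    confidence order, rejecting any node adjacent to an already
    selected one.

    candidates: list of (node, confidence) pairs proposed for expansion.
    partial: iterable of nodes already committed to the solution.
    adj: adjacency matrix as a nested list of booleans.
    """
    selected = set(partial)
    for node, _conf in sorted(candidates, key=lambda c: -c[1]):
        if all(not adj[node][nbr] for nbr in selected):
            selected.add(node)  # candidate survives the projection
    return selected
```

Since each expansion step filters a candidate set produced by a single denoiser call, the cost per step stays bounded, which is the source of the claimed O(Ts) versus O(Ds·Ts) advantage over wrapper-based approaches.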

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution A

Native Adaptive Expansion (NAE) paradigm for diffusion-based CO

The authors introduce a framework that embeds adaptive expansion directly into the diffusion process as an intrinsic decoding principle, rather than as an external wrapper around global predictors. This makes intermediate diffusion states meaningful partial solutions with instance-adaptive progress.

Contribution B

CO-specific masked diffusion with time-agnostic denoiser

The framework uses a corruption process that masks only selected variables (1s) while preserving unselected ones (0s), coupled with a time-agnostic graph neural network denoiser trained under optimization consistency. This ensures intermediate states remain valid partial solutions aligned with combinatorial feasibility.

Contribution C

Solution expansion inference scheme with feasibility projection

At inference time, the method progressively expands partial solutions by selecting high-confidence variables and enforcing problem-specific constraints through feasibility projection. This achieves O(Ts) complexity compared to O(Ds·Ts) for wrapper-based approaches while maintaining solution quality.