Neural Graduated Assignment for Maximum Common Edge Subgraphs

ICLR 2026 Conference Submission (Anonymous Authors)
Keywords: maximum common edge subgraph, quadratic assignment programming, graph similarity, graduated assignment
Abstract:

The Maximum Common Edge Subgraph (MCES) problem is a crucial challenge with significant implications in domains such as biology and chemistry. Traditional approaches, which include transformations into max-clique and search-based algorithms, suffer from scalability issues on larger instances. This paper introduces ``Neural Graduated Assignment'' (NGA), a simple, scalable method with unsupervised training that addresses these limitations. Central to NGA is the stacking of differentiable assignment optimization with neural components, enabling high-dimensional parameterization of the matching process through a learnable temperature mechanism. We further analyze the learning dynamics of NGA theoretically, showing that its design leads to fast convergence, a better exploration-exploitation tradeoff, and an ability to escape local optima. Extensive experiments across MCES computation, graph similarity estimation, and graph retrieval tasks show that NGA not only significantly improves computation time and scalability on large instances but also outperforms existing methodologies. NGA marks a significant advance in the computation of MCES and offers insights into other assignment problems. Code is open-sourced at https://anonymous.4open.science/r/NGA-10E3.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes Neural Graduated Assignment (NGA), a differentiable optimization framework combining neural components with graduated assignment for MCES computation. It resides in the 'Neural Search and Reinforcement Learning' leaf, which contains only three papers total, indicating a relatively sparse research direction within the broader taxonomy of fifty papers. This leaf sits under 'Learning-Based and Neural Approaches', distinguishing itself from exact algorithms, classical heuristics, and end-to-end similarity prediction methods that populate other branches of the field.

The taxonomy reveals that NGA's immediate neighbors include GLSearch and Learning to Search, both exploring neural guidance for combinatorial search. Broader context shows the field divides between exact methods (clique formulations, branch-and-bound), approximate heuristics (stable cores, metaheuristics), and emerging neural approaches. The 'Neural Search and Reinforcement Learning' scope explicitly focuses on GNN-based frameworks guiding search processes, while the sibling 'End-to-End Similarity and Retrieval' leaf addresses direct prediction tasks. NGA's graduated assignment approach positions it at the intersection of continuous optimization and discrete matching, diverging from purely discrete search policies.

Among the seventeen candidate papers examined, the core NGA method faces one refuting candidate out of four compared, while the unsupervised training framework (zero of six) and the theoretical temperature analysis (zero of seven) appear less contested. The limited search scope (seventeen papers rather than hundreds) means these statistics reflect top semantic matches and citation neighbors, not exhaustive coverage. Within this bounded sample, the graduated assignment mechanism and learnable temperature components appear comparatively novel, though the neural-guided search paradigm itself has established precedents in the examined literature.

Given the sparse three-paper leaf and modest seventeen-candidate search, NGA appears to occupy a relatively underexplored methodological niche combining continuous relaxation with neural parameterization. The analysis captures proximity to existing neural search work but cannot assess whether deeper literature contains closer antecedents. The theoretical contributions on temperature dynamics show no refutation among seven examined candidates, suggesting potential novelty within the sampled scope, though broader theoretical literature on graduated assignment remains outside this search boundary.

Taxonomy

Core-task taxonomy papers: 50
Claimed contributions: 3
Contribution candidate papers compared: 17
Refutable papers: 1

Research Landscape Overview

Core task: maximum common edge subgraph computation. The field is organized around several complementary perspectives. Exact algorithms and theoretical foundations establish hardness results and provably optimal methods, while approximate and heuristic approaches trade guarantees for scalability on large instances. Learning-based and neural methods have emerged more recently, applying deep learning and reinforcement learning to guide search or directly predict subgraph correspondences. Parallel and high-performance computing branches address computational bottlenecks through distributed execution. Meanwhile, the taxonomy also reflects diverse problem formulations, such as connected variants or edge-contraction models, and a rich landscape of applications spanning molecular sciences, biological network analysis, pattern recognition, and network monitoring. Works like MCS Algorithms Review[15] and MCES Formulations[17] help anchor the methodological and modeling diversity, while tools such as CytoMCS[16] illustrate practical software implementations.

Within the learning-based branch, a small but growing cluster explores neural search and reinforcement learning strategies that learn to navigate the combinatorial search space more efficiently than classical heuristics. GLSearch[10] and Learning to Search[42] exemplify efforts to train models that adaptively prune or prioritize candidate mappings. Neural Graduated Assignment[0] sits squarely in this line of work, employing a differentiable graduated assignment framework to iteratively refine correspondences via neural guidance. Compared to GLSearch[10], which focuses on learning search policies, Neural Graduated Assignment[0] emphasizes continuous relaxation and gradient-based optimization. This contrasts with purely discrete heuristics like those in Heuristics Chemical Graphs[32] or exact branch-and-cut methods such as Branch Cut MCES[20], highlighting an ongoing tension between interpretability, scalability, and solution quality as the community explores how neural architectures can complement or replace traditional combinatorial techniques.

Claimed Contributions

Neural Graduated Assignment (NGA) method for MCES

The authors introduce Neural Graduated Assignment (NGA), a novel neural-style algorithm that formulates MCES through Association Common Graph (ACG) construction and uses learnable temperature parameterization to approximate MCES solutions efficiently in polynomial time without exhaustive search.
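As a rough illustration (not the authors' implementation), classical graduated assignment alternates Sinkhorn normalization of an affinity matrix with temperature annealing; NGA's departure, per the claim above, is to make the temperature a learnable, neurally parameterized quantity rather than a fixed schedule. A minimal NumPy sketch of the classical loop, assuming plain adjacency-matrix inputs `A1`, `A2`:

```python
import numpy as np

def sinkhorn(M, tau, n_iter=50):
    """Softassign: exponentiate scaled affinities, then alternately
    normalize rows and columns toward a doubly-stochastic matrix."""
    P = np.exp(M / tau)
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

def graduated_assignment(A1, A2, tau0=1.0, decay=0.9, steps=30):
    """Classical graduated assignment with a fixed annealing schedule.
    NGA's key change is a learnable temperature in place of `decay`."""
    n = A1.shape[0]
    P = np.full((n, n), 1.0 / n)  # start from the uniform soft assignment
    tau = tau0
    for _ in range(steps):
        M = A1 @ P @ A2           # gradient of the soft edge-overlap objective
        P = sinkhorn(M, tau)
        tau *= decay              # anneal: sharper, more discrete assignments
    return P

# Toy example: a 3-node path graph matched against itself; the center
# node should be assigned to itself with high confidence.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
P = graduated_assignment(A, A)
```

The two end nodes are exchangeable by symmetry, so their assignment mass stays split between them while the center node's self-match sharpens toward 1.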

Retrieved papers compared: 4 (one can refute)
Unsupervised training framework for MCES

The method operates without requiring training data or supervision signals about the MCES itself (such as MCES structure or size), enabling efficient approximations when exact solutions are computationally infeasible.
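The label-free claim can be made concrete with the standard QAP-style edge-overlap objective, which serves here as an illustrative stand-in for the paper's loss (the exact objective is not specified in this report): the graphs themselves supply the training signal, so no ground-truth MCES is needed.

```python
import numpy as np

def soft_edge_overlap(A1, A2, P):
    """Soft count of common edges under a (soft) node correspondence P:
    the QAP-style score <A1, P A2 P^T> / 2. Maximizing it requires no
    supervision -- only the two adjacency matrices."""
    return np.sum(A1 * (P @ A2 @ P.T)) / 2.0  # each edge is counted twice

# Two identical triangles; the identity mapping recovers all 3 edges.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
score = soft_edge_overlap(A, A, np.eye(3))  # -> 3.0
```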

Retrieved papers compared: 6 (none refute)
Theoretical analysis of learnable temperature dynamics

The authors provide theoretical analysis demonstrating how NGA's learnable temperature parameterization enables escaping local optima, achieves faster convergence through adaptive learning rates, and balances exploration-exploitation tradeoffs.
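The exploration-exploitation role of the temperature is easy to see directly: a high temperature yields near-uniform, high-entropy (exploratory) assignments, while a low temperature yields near-one-hot (exploitative) ones. A hypothetical NumPy illustration, independent of NGA's specific parameterization:

```python
import numpy as np

def row_softmax(M, tau):
    """Row-wise softmax at temperature tau (numerically stabilized)."""
    Z = np.exp((M - M.max(axis=1, keepdims=True)) / tau)
    return Z / Z.sum(axis=1, keepdims=True)

def mean_row_entropy(P):
    """Average entropy of the assignment rows: high = exploratory."""
    return float(np.mean(-np.sum(P * np.log(P + 1e-12), axis=1)))

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
hot  = mean_row_entropy(row_softmax(M, tau=10.0))  # near-uniform rows
cold = mean_row_entropy(row_softmax(M, tau=0.1))   # near-one-hot rows
```

A learnable temperature lets the optimizer move along this entropy axis adaptively, which is the mechanism the claimed analysis connects to local-optima escape and convergence speed.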

Retrieved papers compared: 7 (none refute)

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Neural Graduated Assignment (NGA) method for MCES

The authors introduce Neural Graduated Assignment (NGA), a novel neural-style algorithm that formulates MCES through Association Common Graph (ACG) construction and uses learnable temperature parameterization to approximate MCES solutions efficiently in polynomial time without exhaustive search.

Contribution

Unsupervised training framework for MCES

The method operates without requiring training data or supervision signals about the MCES itself (such as MCES structure or size), enabling efficient approximations when exact solutions are computationally infeasible.

Contribution

Theoretical analysis of learnable temperature dynamics

The authors provide theoretical analysis demonstrating how NGA's learnable temperature parameterization enables escaping local optima, achieves faster convergence through adaptive learning rates, and balances exploration-exploitation tradeoffs.