Multi-Domain Transferable Graph Gluing for Building Graph Foundation Models

ICLR 2026 Conference Submission, Anonymous Authors
Keywords: multi-domain graph pre-training, graph neural network, graph foundation model, Riemannian geometry
Abstract:

Multi-domain graph pre-training integrates knowledge from diverse domains to enhance performance on target domains, which is crucial for building graph foundation models. Despite initial success, existing solutions often fall short of answering a fundamental question: how is knowledge integrated or transferred across domains? This theoretical limitation motivates us to rethink the consistency and transferability between the pre-trained model and target domains. In this paper, we propose a fresh differential-geometry perspective, whose core idea is to merge any graph dataset into a unified, smooth Riemannian manifold, enabling a systematic understanding of knowledge integration and transfer. To achieve this, our key contribution is the theoretical establishment of neural manifold gluing, which first characterizes local geometry using an adaptive orthogonal frame and then “glues” the local pieces together into a coherent whole. Building on this theory, we present the GraphGlue framework, which supports batched pre-training with EMA prototyping and provides a transferability measure based on geometric consistency. Extensive experiments demonstrate its superior performance across diverse graph domains. Moreover, we empirically validate GraphGlue’s geometric scaling law, showing that pre-training on larger numbers of datasets improves model transferability by producing a smoother manifold.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes a differential geometry framework for multi-domain graph pre-training, introducing neural manifold gluing theory to integrate diverse graph datasets into a unified Riemannian manifold. It resides in the Text-Free Multi-Domain Graph Foundation Models leaf, which contains five papers total, indicating a moderately populated research direction. This leaf focuses on foundation models addressing topology alignment and domain divergence without textual attributes, distinguishing it from text-attributed approaches and prompt-based methods that dominate neighboring branches.

The taxonomy reveals that this work sits within the broader Graph Foundation Models and Universal Pre-Training Frameworks branch, which also includes Text-Attributed Graph Foundation Models and Scalable Multi-Graph Pre-Training Architectures. Neighboring branches emphasize prompt tuning, alignment techniques, and domain adaptation strategies. The paper's geometric perspective diverges from sibling works that typically employ contrastive learning, prototype-based alignment, or adversarial training. The scope note clarifies this leaf excludes text-attributed methods and domain-specific adaptation, positioning the work as addressing fundamental structural integration challenges rather than feature-level alignment.

Among twenty-seven candidates examined across three contributions, none were identified as clearly refuting the proposed ideas. The neural manifold gluing theory examined ten candidates with zero refutations, the GraphGlue framework examined seven candidates with zero refutations, and the geometric scaling law examined ten candidates with zero refutations. This suggests that within the limited search scope—primarily top-K semantic matches and citation expansion—the differential geometry framing and manifold-based integration approach appear distinct from existing methods. However, the analysis explicitly notes this is not an exhaustive literature search.

Based on the limited examination of twenty-seven candidates, the work appears to introduce a novel theoretical lens through differential geometry that is not prominently represented in the immediate literature. The absence of refutable prior work among examined candidates suggests potential originality, though this assessment is constrained by the search scope and does not preclude the existence of related geometric approaches in the broader graph learning literature beyond the examined set.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 27
Refutable Papers: 0

Research Landscape Overview

Core task: Multi-domain graph pre-training and cross-domain transfer learning. This field addresses the challenge of building graph neural network models that can leverage knowledge from multiple graph domains and transfer learned representations to new, potentially unseen domains. The taxonomy reveals several major branches: Graph Foundation Models and Universal Pre-Training Frameworks focus on building unified architectures that can handle diverse graph structures without domain-specific customization, often employing techniques like GCC[25] or more recent approaches such as GraphFM[11] and All in One[10]. Prompt-Based and Alignment-Based Transfer Learning explores methods like GPPT[5] and Uniform Graph Prompting[2] that adapt pre-trained models through prompting mechanisms or alignment strategies such as Topology Alignment[8]. Domain Adaptation and Cross-Network Transfer tackles the problem of bridging structural and distributional gaps between source and target graphs, while branches like Cross-Domain Recommendation Systems and Knowledge Graph Pre-Training address specific application contexts. Self-Supervised and Contrastive Graph Pre-Training emphasizes learning transferable representations without labeled data, and Federated and Decentralized Graph Learning considers privacy-preserving multi-domain scenarios.

A particularly active line of work centers on text-free multi-domain foundation models that avoid reliance on textual attributes, enabling broader applicability across domains where rich node features may be unavailable. Graph Gluing[0] sits within this cluster, alongside works like Text-free Multi-domain[44], SAMGPT[6], and Unified Multi-domain[24], all emphasizing universal pre-training without textual scaffolding. Compared to prompt-based approaches like Multi-domain KG Prompting[3] or alignment-focused methods such as Topology Alignment[8], these text-free frameworks prioritize structural learning and domain-agnostic representations.
A key tension across branches involves balancing expressiveness—capturing domain-specific nuances—with generalizability to new graph types. Graph Gluing[0] addresses this by developing mechanisms to integrate knowledge across heterogeneous graph structures, contrasting with more specialized transfer methods like Adversarial Graph Transfer[12] or meta-learning approaches such as Multi-Domain Meta Learning[13] that require explicit source-target pairing.

Claimed Contributions

Neural manifold gluing theory for multi-domain graph integration

The authors establish a novel theory called neural manifold gluing that merges arbitrary graph datasets into a unified, smooth Riemannian manifold. This theory characterizes local geometry using adaptive orthogonal frames and then glues local pieces together through metric compatibility and holonomy concepts, providing a principled framework for understanding knowledge integration and transfer across domains.

10 retrieved papers
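The report gives no formulas for these constructs. Purely as an illustrative sketch of what an "adaptive orthogonal frame", chart-to-chart transition maps, and a holonomy check might look like numerically, here is a hypothetical NumPy example; the k-NN frame construction, the rank-3 truncation, and all function names are our assumptions, not the authors' method:

```python
import numpy as np

def local_frame(X, center_idx, k=8):
    """Orthonormal frame at one point: QR-orthogonalise the directions
    to its k nearest neighbours (a crude proxy for a tangent frame)."""
    diffs = X - X[center_idx]
    dists = np.linalg.norm(diffs, axis=1)
    nn = np.argsort(dists)[1:k + 1]          # skip the point itself
    Q, _ = np.linalg.qr(diffs[nn].T)         # columns span the local tangent space
    return Q[:, :3]                          # keep a rank-3 frame for illustration

def transition_rotation(Fa, Fb):
    """Best orthogonal map aligning frame Fa to frame Fb (Procrustes)."""
    U, _, Vt = np.linalg.svd(Fb.T @ Fa)
    return U @ Vt

def holonomy_defect(frames):
    """Compose transition maps around a closed loop of charts; on a
    perfectly 'glued' manifold the composition is the identity, so the
    deviation from the identity measures the gluing inconsistency."""
    R = np.eye(frames[0].shape[1])
    loop = frames + [frames[0]]
    for Fa, Fb in zip(loop[:-1], loop[1:]):
        R = transition_rotation(Fa, Fb) @ R
    return float(np.linalg.norm(R - np.eye(R.shape[0])))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                 # stand-in for node embeddings
frames = [local_frame(X, i) for i in (0, 10, 20)]
defect = holonomy_defect(frames)
```

A zero defect around every loop would correspond to frames that glue consistently; a large defect flags charts whose local geometries disagree.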
GRAPHGLUE framework with batched pre-training and transferability measure

The authors design a pre-training-adaptation framework called GRAPHGLUE that implements the neural manifold gluing theory. It includes EMA prototyping for efficient batched pre-training, learnable prompts with Riemannian Mixture-of-Experts for adaptation, and a Geometric Transfer Metric (GTM) that naturally quantifies transfer difficulty based on geometric consistency.

7 retrieved papers
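The report describes EMA prototyping and the Geometric Transfer Metric only at this conceptual level. The following sketch shows one plausible reading: per-domain prototypes maintained by an exponential moving average over batches, and a transfer score that compares a target dataset against the pre-trained prototypes. The class, the cosine-based score, and the momentum value are hypothetical stand-ins, not the paper's actual GTM:

```python
import numpy as np

class EMAPrototypes:
    """Per-domain prototypes updated with an exponential moving average,
    so batched pre-training never needs a full dataset in memory."""
    def __init__(self, momentum=0.99):
        self.momentum = momentum
        self.protos = {}                      # domain name -> prototype vector

    def update(self, domain, batch_embeddings):
        mean = batch_embeddings.mean(axis=0)
        if domain not in self.protos:
            self.protos[domain] = mean        # first batch initialises the prototype
        else:
            m = self.momentum
            self.protos[domain] = m * self.protos[domain] + (1 - m) * mean

    def transfer_score(self, target_embeddings):
        """A GTM-like score: cosine similarity between the target's mean
        embedding and its closest pre-trained prototype (higher = easier)."""
        t = target_embeddings.mean(axis=0)
        t = t / np.linalg.norm(t)
        sims = [p @ t / np.linalg.norm(p) for p in self.protos.values()]
        return float(max(sims))

rng = np.random.default_rng(1)
bank = EMAPrototypes()
for step in range(5):                         # simulated pre-training batches
    bank.update("citation", rng.normal(loc=1.0, size=(32, 16)))
    bank.update("social", rng.normal(loc=-1.0, size=(32, 16)))
# a target domain that resembles the "citation" domain scores high
score = bank.transfer_score(rng.normal(loc=1.0, size=(32, 16)))
```

The EMA update keeps memory constant in the number of batches, which is presumably what makes the batched pre-training in the framework efficient.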
Geometric scaling law for graph foundation models

The authors demonstrate that GRAPHGLUE exhibits a geometric scaling law where increasing the quantity of pre-training datasets produces a smoother manifold, thereby improving model transferability. This scaling behavior is both theoretically motivated and empirically validated through experiments.

10 retrieved papers
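The report does not state how manifold smoothness is quantified or how the scaling exponent is fitted. As a rough illustration only, the sketch below uses a local-PCA residual as a smoothness proxy and extracts a power-law exponent by a log-log least-squares fit; the roughness readings passed to the fit are made-up numbers chosen solely to demonstrate the fitting step, not results from the paper:

```python
import numpy as np

def smoothness_proxy(X, k=10):
    """Roughness of an embedding cloud: average fraction of local variance
    not captured by a rank-2 tangent plane (lower = smoother manifold)."""
    residuals = []
    for i in range(len(X)):
        diffs = X - X[i]
        nn = np.argsort(np.linalg.norm(diffs, axis=1))[1:k + 1]
        s = np.linalg.svd(diffs[nn], compute_uv=False)
        var = s ** 2
        residuals.append(var[2:].sum() / var.sum())   # beyond a rank-2 plane
    return float(np.mean(residuals))

def fit_power_law(n_datasets, roughness):
    """Fit roughness ~ c * n^(-alpha) by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(n_datasets), np.log(roughness), 1)
    return -slope, float(np.exp(intercept))

rng = np.random.default_rng(0)
rough = smoothness_proxy(rng.normal(size=(80, 6)))    # one synthetic cloud

# made-up roughness readings for 2, 4, 8, 16 merged datasets, only to
# show how a scaling exponent would be extracted from real measurements
alpha, c = fit_power_law([2, 4, 8, 16], [0.30, 0.21, 0.15, 0.11])
```

Under a law of this form, a positive fitted alpha corresponds to the claimed behavior: roughness falls, and the manifold smooths, as the number of pre-training datasets grows.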

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Neural manifold gluing theory for multi-domain graph integration


Contribution

GRAPHGLUE framework with batched pre-training and transferability measure


Contribution

Geometric scaling law for graph foundation models

