LogicXGNN: Grounded Logical Rules for Explaining Graph Neural Networks

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: Graph Neural Networks, Interpretability, Explainability, Neural-symbolic, Logical Rules, AI for Science, XAI
Abstract:

Existing rule-based explanations for Graph Neural Networks (GNNs) provide global interpretability but often optimize and assess fidelity in an intermediate, uninterpretable concept space, overlooking the grounding quality of the final subgraph explanations for end users. This gap yields explanations that may appear faithful yet be unreliable in practice. To address this, we propose LogicXGNN, a post hoc framework that constructs logical rules over reliable predicates explicitly designed to capture the GNN's message-passing structure, thereby ensuring effective grounding. We further introduce data-grounded fidelity (Fid_D), a realistic metric that evaluates explanations in their final-graph form, along with complementary utility metrics such as coverage and validity. Across extensive experiments, LogicXGNN improves Fid_D by over 20% on average relative to state-of-the-art methods while being 10-100 times faster. With strong scalability and utility performance, LogicXGNN produces explanations that are faithful to the model's logic and reliably grounded in observable data.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes LogicXGNN, a framework for extracting global logical rules from trained GNN models to explain predictions. It resides in the 'Global Rule Extraction from GNN Behavior' leaf, which contains four papers including the original work. This leaf sits within the broader 'Rule Extraction and Logic-Based Explanation Generation' branch, indicating a moderately populated research direction. The taxonomy shows this is an active area with multiple complementary approaches, though not as crowded as some domain-specific application categories.

The taxonomy reveals several neighboring research directions. Adjacent leaves include 'Logic Formula and Symbolic Representation Extraction' (3 papers) and 'Path-Based and Subgraph Rule Explanation' (3 papers), both focused on symbolic explanation but with different structural emphases. The broader taxonomy also shows parallel branches in 'Concept-Based and Neuron-Level Interpretability' (4 papers) and 'Explanation Evaluation and Validation Frameworks' (6 papers). LogicXGNN bridges rule extraction with evaluation concerns by introducing data-grounded fidelity, connecting to the validation framework branch while remaining rooted in symbolic rule generation.

Among 30 candidates examined across three contributions, no clearly refuting prior work was identified. The data-grounded fidelity metric examined 10 candidates with 0 refutable, suggesting this evaluation approach may be relatively novel within the limited search scope. Similarly, the LogicXGNN framework and reliable predicates design each examined 10 candidates without finding overlapping prior work. However, this analysis reflects top-K semantic search results, not an exhaustive literature review, so the absence of refutation indicates novelty within the examined candidate set rather than absolute originality across all published work.

Based on the limited search scope of 30 semantically similar papers, the work appears to introduce distinct contributions in both methodology and evaluation. The taxonomy position in a moderately populated leaf suggests the paper addresses an established problem with fresh techniques. The lack of refuting candidates across all three contributions, while not definitive proof of novelty, indicates the specific combination of data-grounded evaluation and message-passing-aware predicates may represent a meaningful advance within the examined literature.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 0

Research Landscape Overview

Core task: Explaining graph neural networks with logical rules. The field has organized itself around several complementary directions. Rule Extraction and Logic-Based Explanation Generation focuses on distilling symbolic rules from trained GNNs, either at a global level (capturing overall model behavior) or locally (explaining individual predictions). Concept-Based and Neuron-Level Interpretability examines what internal representations learn, often linking hidden activations to human-understandable concepts. Neural-Symbolic Integration and Rule-Guided Learning explores hybrid architectures that combine neural learning with symbolic reasoning, allowing logic to guide training or inference. Expressivity and Theoretical Foundations investigates the formal capabilities of GNNs in capturing logical structures, while Self-Explainable and Transparent GNN Architectures designs models that are interpretable by construction. Explanation Evaluation and Validation Frameworks addresses how to rigorously assess the quality and faithfulness of explanations, and Domain-Specific Applications and Hybrid Approaches tailors these techniques to specialized settings such as knowledge graphs or temporal reasoning.

Within Rule Extraction, a particularly active line of work seeks to produce global logical summaries of GNN decision boundaries. LogicXGNN[0] exemplifies this direction by extracting interpretable rules that describe the model's overall behavior across the input space. Nearby efforts like Global Logic Explainability[10] and Extracting Logic Rules[11] similarly aim to distill symbolic patterns from trained networks, though they may differ in the granularity or formalism of the extracted rules. In contrast, works such as Global Concept Interpretability[3] shift focus from rule extraction to identifying high-level concepts encoded in neuron activations, offering a complementary lens on what the model has learned.
The central tension across these branches is balancing expressiveness—capturing complex, nuanced patterns—with human readability, as overly detailed rules can become as opaque as the original neural network. LogicXGNN[0] sits squarely in the global rule extraction cluster, emphasizing symbolic summaries that remain interpretable while faithfully representing the GNN's learned logic.

Claimed Contributions

Data-grounded fidelity metric for evaluating rule-based GNN explanations

The authors propose a new evaluation metric called data-grounded fidelity (Fid_D) that assesses rule-based explanations directly on the final subgraph explanations presented to end users, rather than in an intermediate concept space. This metric is complemented by utility metrics including coverage and validity to provide a more realistic assessment of explanation quality.

10 retrieved papers
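To make the metric's intent concrete, here is a minimal sketch of how a data-grounded fidelity score of this kind might be computed. The function name, interface, and the decision to measure validity separately are illustrative assumptions, not the paper's actual implementation: it treats Fid_D as the rate at which a grounded subgraph alone reproduces the model's prediction on the full graph, and coverage as the fraction of graphs any rule grounds at all.

```python
from typing import Callable, List, Optional

def data_grounded_fidelity(
    predict: Callable[[object], int],      # GNN class prediction on a (sub)graph
    graphs: List[object],                  # original input graphs
    grounded: List[Optional[object]],      # grounded subgraph per graph, or None if no rule fires
) -> dict:
    """Hypothetical sketch: score explanations in their final-graph form.

    fid_d    - among graphs a rule grounds, how often the grounded subgraph
               alone reproduces the model's original prediction.
    coverage - fraction of graphs for which some rule yields a grounding.
    """
    covered = [(g, s) for g, s in zip(graphs, grounded) if s is not None]
    coverage = len(covered) / len(graphs) if graphs else 0.0
    agree = sum(1 for g, s in covered if predict(s) == predict(g))
    fid_d = agree / len(covered) if covered else 0.0
    return {"fid_d": fid_d, "coverage": coverage}
```

The key contrast with concept-space fidelity is that `predict` is applied to the grounded subgraph itself, so an explanation only scores well if its final, user-facing form actually drives the model to the same decision.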
LogicXGNN framework for generating faithful logical rule-based explanations

The authors introduce LogicXGNN, a novel post-hoc explanation framework that generates logical rules using predicates specifically designed to capture structural patterns from the GNN's message-passing mechanism. This design ensures effective grounding of explanations in observable data, producing both representative subgraphs and generalizable grounding rules.

10 retrieved papers
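As a rough illustration of the rule-extraction idea (not the paper's algorithm; the function and predicate names are hypothetical), one can group inputs by which binary predicates fire and keep activation patterns that map unambiguously to a single GNN-predicted class, reading each pattern off as a conjunctive rule:

```python
from collections import defaultdict
from typing import Dict, FrozenSet, List, Tuple

def extract_class_rules(
    samples: List[Tuple[Dict[str, bool], int]],  # (predicate truth assignment, GNN-predicted class)
) -> Dict[int, List[FrozenSet[str]]]:
    """Sketch: derive conjunctive rules (p1 AND p2 AND ... -> class) from
    predicate activation patterns that are consistent with one class."""
    by_pattern = defaultdict(set)
    for preds, cls in samples:
        pattern = frozenset(p for p, v in preds.items() if v)
        by_pattern[pattern].add(cls)
    rules = defaultdict(list)
    for pattern, classes in by_pattern.items():
        if len(classes) == 1:          # pattern maps to exactly one class -> usable rule
            rules[next(iter(classes))].append(pattern)
    return dict(rules)
```

Because the rules are built over predicates tied to the GNN's message-passing structure, each conjunction can in principle be grounded back to a concrete subgraph, which is the property the contribution emphasizes.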
Reliable predicates preserving GNN message-passing structure

The framework constructs predicates that explicitly model recurring structural patterns induced by the GNN's message-passing computation, using techniques like Weisfeiler-Lehman graph hashing to capture receptive field topologies. This approach addresses unreliable grounding issues in existing methods by ensuring predicates are both structurally grounded and model-faithful.

10 retrieved papers
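The Weisfeiler-Lehman hashing step can be sketched in a few lines of pure Python (the paper presumably uses an off-the-shelf WL graph hash such as NetworkX's; this minimal refinement loop and its names are illustrative): each round, a node's color becomes a hash of its own color plus the sorted multiset of its neighbors' colors, so after k rounds equal colors mark nodes whose k-hop receptive fields share the same topology.

```python
import hashlib
from typing import Dict, List

def wl_node_hashes(adj: Dict[int, List[int]],
                   labels: Dict[int, str],
                   iterations: int = 2) -> Dict[int, str]:
    """Sketch of WL color refinement: after `iterations` rounds, a node's
    color summarizes its k-hop receptive-field topology, so nodes with equal
    hashes are candidates for the same structural predicate."""
    colors = {n: labels[n] for n in adj}
    for _ in range(iterations):
        new_colors = {}
        for n in adj:
            signature = colors[n] + "|" + ",".join(sorted(colors[m] for m in adj[n]))
            new_colors[n] = hashlib.sha256(signature.encode()).hexdigest()[:16]
        colors = new_colors
    return colors
```

On a path graph 0-1-2 with identical node labels, the two endpoints receive the same hash while the center node receives a different one, which is exactly the kind of receptive-field equivalence a structural predicate would capture.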

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Data-grounded fidelity metric for evaluating rule-based GNN explanations

The authors propose a new evaluation metric called data-grounded fidelity (Fid_D) that assesses rule-based explanations directly on the final subgraph explanations presented to end users, rather than in an intermediate concept space. This metric is complemented by utility metrics including coverage and validity to provide a more realistic assessment of explanation quality.

Contribution

LogicXGNN framework for generating faithful logical rule-based explanations

The authors introduce LogicXGNN, a novel post-hoc explanation framework that generates logical rules using predicates specifically designed to capture structural patterns from the GNN's message-passing mechanism. This design ensures effective grounding of explanations in observable data, producing both representative subgraphs and generalizable grounding rules.

Contribution

Reliable predicates preserving GNN message-passing structure

The framework constructs predicates that explicitly model recurring structural patterns induced by the GNN's message-passing computation, using techniques like Weisfeiler-Lehman graph hashing to capture receptive field topologies. This approach addresses unreliable grounding issues in existing methods by ensuring predicates are both structurally grounded and model-faithful.
