From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: fibring, modal logics, logical expressiveness, graph neural networks, transformer encoders
Abstract:

Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language with common semantics, characterized by fibred models. Inspired by this formalism, fibring of neural networks was introduced as a neurosymbolic framework for combining learning and reasoning in neural networks. Fibring of neural networks uses the (pre-)activations of a trained network to evaluate a fibring function computing the weights of another network whose outputs are injected back into the original network. However, the exact correspondence between fibring of neural networks and fibring of modal logics was never formally established. In this paper, we close this gap by formalizing the idea of fibred models compatible with fibred neural networks. Using this correspondence, we then derive non-uniform logical expressiveness results for Graph Neural Networks (GNNs), Graph Attention Networks (GATs) and Transformer encoders. Longer-term, the goal of this paper is to open the way for the use of fibring as a formalism for interpreting the logical theories learnt by neural networks with the tools of computational logic.
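The fibring mechanism the abstract describes, where one network's pre-activations drive a fibring function that computes the weights of an embedded network whose output is injected back, can be sketched concretely. The following is a minimal, hypothetical illustration only: the shapes, the `tanh` activations, the outer-product fibring function, and the additive injection are all assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of the "host" network A (shapes are illustrative).
W_A = rng.normal(size=(3, 4))
b_A = np.zeros(3)

def fibring_function(pre_activation):
    """Map A's pre-activation to a weight matrix for the embedded network B."""
    v = np.tanh(pre_activation)                # squash to keep B's weights bounded
    return np.outer(np.array([1.0, -1.0]), v)  # (2, 3) weight matrix for B

def fibred_forward(x):
    pre_A = W_A @ x + b_A                      # A's pre-activation, shape (3,)
    W_B = fibring_function(pre_A)              # fibring: B's weights depend on A
    out_B = np.tanh(W_B @ np.tanh(pre_A))      # evaluate embedded network B
    injected = pre_A.copy()
    injected[:2] += out_B                      # inject B's output back into A
    return np.tanh(injected)                   # A's final activation

y = fibred_forward(rng.normal(size=4))
print(y.shape)  # (3,)
```

The key point the sketch captures is that B's weights are not fixed parameters but a function of A's state, which is what distinguishes fibring from ordinary network composition.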

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper establishes a formal correspondence between fibring of modal logics and fibring of neural networks, deriving expressiveness results for GNNs, GATs, and Transformer encoders. It resides in the 'Formal Correspondence and Expressiveness' leaf, which contains only two papers total (including this one). This indicates a relatively sparse research direction within the broader taxonomy of seven papers across three main branches. The sibling paper in this leaf shares the focus on proving formal equivalences between fibred modal logics and fibred neural networks, suggesting this is an emerging subfield with limited prior work.

The taxonomy reveals that the paper sits within 'Foundational Fibring Theory and Formalization', which contrasts with neighboring branches focused on applications (Modal and Temporal Reasoning Systems, Cognitive Integration) and semantic foundations (Neural Network Semantics and Learning Policies). The scope note for the paper's leaf explicitly excludes general fibring methodology without formal correspondence proofs, positioning this work as more theoretically rigorous than the adjacent 'Methodological Frameworks for Network Fibring' leaf. The taxonomy structure suggests the paper bridges foundational theory with expressiveness analysis, a direction less explored than applied reasoning systems.

Among the twenty-eight candidates examined, none clearly refuted any of the three main contributions. The formal correspondence contribution was checked against eight candidates, the expressiveness results for GNNs, GATs, and Transformer encoders against ten, and the redefinition of fibring for modern architectures against another ten, with no refutations in any group. This suggests that, within the limited search scope, the specific combination of formal correspondence proofs and non-uniform expressiveness results for these particular architectures is relatively unexplored. However, the small candidate pool means the analysis cannot rule out relevant work outside the top-K semantic matches.

Based on the limited search of twenty-eight candidates and the sparse taxonomy leaf (two papers total), the work appears to occupy a relatively novel position at the intersection of formal logic and neural network expressiveness. The absence of refutable candidates across all contributions suggests the specific technical approach is distinct within the examined literature, though the small search scope and emerging nature of the field mean this assessment is necessarily provisional and subject to revision with broader literature coverage.

Taxonomy

7 Core-task Taxonomy Papers
3 Claimed Contributions
28 Contribution Candidate Papers Compared
0 Refutable Papers

Research Landscape Overview

Core task: correspondence between fibring modal logics and fibring neural networks. This emerging field explores how modular composition techniques from logic can be mirrored in neural architectures, creating a bridge between symbolic reasoning and connectionist learning. The taxonomy reveals three main branches: Foundational Fibring Theory and Formalization, which develops the mathematical machinery for combining logical systems and their neural counterparts; Application Domains and Reasoning Systems, which examines how these hybrid approaches tackle specific reasoning tasks; and Semantic Foundations and Learning Theory, which investigates the theoretical underpinnings of how neural networks can faithfully represent logical semantics.

Early works such as Fibring Neural Networks[1] and Network Fibring[6] established the basic compositional principles, while more recent efforts like Neural Network Semantics[3] have refined the formal correspondence between logical operators and network structures. Within the foundational branch, several contrasting themes emerge around expressiveness versus tractability: some studies emphasize rigorous formal guarantees that neural fibring preserves logical properties, while others prioritize practical scalability in complex reasoning scenarios.

Fibring Modal Logics[0] sits squarely within the Formal Correspondence and Expressiveness cluster, focusing on establishing precise mappings between modal logic fibring operations and their neural analogues. This places it close to Neural Network Semantics[3], which similarly pursues formal semantic grounding, though Fibring Modal Logics[0] appears to place greater emphasis on modal operators specifically. In contrast, works like Modal Temporal Reasoning[5] and Neural Symbolic Cognitive[2] lean more toward application-driven integration, exploring how fibred systems can handle temporal or cognitive tasks. The central open question remains how to balance the compositional elegance of fibring with the learning efficiency required for real-world deployment.

Claimed Contributions

Formal correspondence between fibring neural networks and fibring modal logics

The authors establish an exact formal correspondence between fibring of neural networks and fibring of modal logics by defining fibred models compatible with fibred neural networks and proving that the class of compatible fibred models forms a valid fibred logic.

8 retrieved papers

Non-uniform logical expressiveness results for GNNs, GATs, and Transformer encoders

The authors prove that fibred neural networks can non-uniformly describe GNNs, GATs, and Transformer encoders, providing a countable family of formulas in the corresponding fibred logic that characterizes each network instance.

10 retrieved papers

Redefinition of fibring neural networks for modern architectures

The authors provide a generalized definition of fibred neural networks that extends the original formalism to any number and combinations of neural networks, making it applicable to modern architectures like GNNs and Transformers.

10 retrieved papers
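One hedged way to read the non-uniform expressiveness claim above, in standard notation (the symbols below are illustrative, not the paper's own): for a fixed network instance, the characterizing family is indexed by the countably many input sizes, so no single formula need work uniformly for all inputs.

```latex
% Illustrative reading of non-uniform describability (notation assumed):
% for each fixed network instance $N$ (a GNN, GAT, or Transformer encoder
% with fixed weights) and each input size $n$, there is a formula of the
% fibred logic characterizing $N$'s behaviour on inputs of that size:
\forall n \in \mathbb{N}\;\exists\, \varphi^{N}_{n} \text{ in the fibred logic}:
\quad \forall x \text{ with } |x| = n,\qquad
N \text{ accepts } x \iff x \models \varphi^{N}_{n}.
% The countable family $(\varphi^{N}_{n})_{n \in \mathbb{N}}$ then
% characterizes the single instance $N$, without a uniform formula.
```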

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution
