Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding

ICLR 2026 Conference Submission, Anonymous Authors
Keywords: Topological Deep Learning, Graph Neural Networks
Abstract:

Graph Neural Networks (GNNs) excel at learning from pairwise interactions but often overlook multi-way and hierarchical relationships. Topological Deep Learning (TDL) addresses this limitation by leveraging combinatorial topological spaces, such as simplicial or cell complexes. However, existing TDL models are restricted to undirected settings and fail to capture the higher-order directed patterns prevalent in many complex systems, e.g., brain networks, where such interactions are both abundant and functionally significant. To fill this gap, we introduce Semi-Simplicial Neural Networks (SSNs), a principled class of TDL models that operate on semi-simplicial sets---combinatorial structures that encode directed higher-order motifs and their directional relationships. To enhance scalability, we propose Routing-SSNs, which dynamically select the most informative relations in a learnable manner. We theoretically characterize SSNs by proving they are strictly more expressive than standard graph and TDL models, and they are able to recover several topological descriptors. Building on previous evidence that such descriptors are critical for characterizing brain activity, we then introduce a new principled framework for brain dynamics representation learning centered on SSNs. Empirically, we test SSNs on 4 distinct tasks across 13 datasets, spanning from brain dynamics to node classification, showing competitive performance. Notably, SSNs consistently achieve state-of-the-art performance on brain dynamics classification tasks, outperforming the second-best model by up to 27%, and message passing GNNs by up to 50% in accuracy. Our results highlight the potential of topological models for learning from structured brain data, establishing a unique real-world case study for TDL. Code and data are uploaded as supplementary material.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND ITS JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Taxonomy

26 Core-task Taxonomy Papers
3 Claimed Contributions
22 Contribution Candidate Papers Compared
1 Refutable Paper

Research Landscape Overview

Core task: learning from directed higher-order structures in brain networks. The field has evolved to address the challenge that traditional graph models often overlook the directional and multi-way interactions inherent in neural connectivity. The taxonomy reflects this evolution through several main branches. Directed Higher-Order Topological Frameworks develop mathematical formalisms, such as simplicial and semi-simplicial complexes, that explicitly encode directional flow and higher-order dependencies, as seen in works like Higher-Order Topological Directionality[13] and Digraph-Based Complexes[18]. Brain Network Analysis and Neuroimaging Applications focus on translating these abstractions into practical tools for understanding neural pathways and cognitive states, exemplified by Neural Pathway Transformer[3] and Spatial Craving Patterns[1]. Meanwhile, Knowledge Graph Completion and Temporal Reasoning, General Graph Neural Network Architectures, and Specialized Network Learning Paradigms address complementary challenges, ranging from temporal dynamics and heterogeneous data integration to scalable message-passing schemes, that arise when modeling complex relational structures beyond the brain.

A particularly active line of work centers on extending classical topological methods to capture directional flows in simplicial structures, where the interplay between algebraic topology and neural computation remains an open question. Directed Semi-Simplicial Learning[0] sits squarely within this cluster, proposing a framework that generalizes semi-simplicial complexes to directed settings. It shares conceptual ground with Higher-Order Topological Directionality[13], which also emphasizes directional higher-order features, and with Digraph-Based Complexes[18], which explores alternative combinatorial constructions for directed graphs.
The main trade-off across these approaches involves balancing expressive power—capturing intricate directional motifs—against computational tractability and interpretability in neuroimaging contexts. While some methods prioritize rigorous topological guarantees, others lean toward flexible neural architectures that can adapt to heterogeneous brain data, leaving the optimal synthesis of theory and practice an ongoing area of exploration.

Claimed Contributions

Semi-Simplicial Neural Networks (SSNs)

The authors propose SSNs, a novel class of Topological Deep Learning architectures that operate on semi-simplicial sets and leverage face-map–induced relations to capture directed higher-order motifs. They prove SSNs are strictly more expressive than message-passing GNNs, Directed GNNs, and Message-Passing Simplicial Neural Networks in the Weisfeiler-Leman hierarchy.
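To make the face-map idea concrete, the following is a minimal toy sketch of one message-passing step over face-map-induced relations. It is not the authors' implementation: the representation of ordered simplices as tuples, the `face` and `ssn_step` names, and the fixed per-face-map weights are all illustrative assumptions (in an actual SSN the weights would be learned and the update would be a neural map).

```python
# Toy sketch: message passing over face maps on a semi-simplicial set.
# Ordered simplices are tuples; because order is kept, (0, 1) and (1, 0)
# are distinct, which is how directionality enters the picture.

def face(simplex, i):
    """The i-th face map d_i: drop the i-th vertex of an ordered simplex."""
    return simplex[:i] + simplex[i + 1:]

def ssn_step(features, simplices, weights):
    """One layer: each simplex adds its faces' features, scaled by a
    per-face-map weight (fixed floats here, learned in a real model)."""
    out = {}
    for s in simplices:
        msg = features[s]  # self contribution
        if len(s) > 1:
            for i in range(len(s)):
                f = face(s, i)
                if f in features:
                    msg = msg + weights[i] * features[f]
        out[s] = msg
    return out

# A directed 2-simplex (0, 1, 2) together with its ordered edges and vertices.
simplices = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
features = {s: float(sum(s)) for s in simplices}
updated = ssn_step(features, simplices, weights={0: 0.5, 1: 0.5, 2: 0.5})
```

Because each face map carries its own weight, the layer can treat the "source" and "target" faces of a directed simplex differently, which a symmetric (undirected) simplicial network cannot.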

2 retrieved papers (can refute)
Topology-grounded framework for brain dynamics representation learning

The authors introduce Dynamical Activity Complexes (DACs), which are directed simplicial complexes with time-evolving binary features encoding neuronal co-activation. They formally prove that SSNs operating on DACs can recover a broader class of topological invariants known to characterize brain network activity, which existing graph and TDL models cannot.
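As a rough illustration of the co-activation construction, the sketch below builds one time slice of a complex from a set of simultaneously active neurons. The construction details are assumptions for illustration only: the report does not specify how the authors order vertices or bound the dimension, and a real DAC would presumably derive the ordering from the dynamics (e.g., activation onset) rather than from vertex labels.

```python
# Hedged sketch: one time slice of a co-activation complex from binary
# activity. Every subset of co-active neurons (up to max_dim + 1
# vertices) becomes an ordered simplex.
from itertools import combinations

def coactivation_simplices(active, max_dim=2):
    """Ordered tuples of co-active neurons; sorting by label is a
    stand-in for a direction induced by the dynamics."""
    active = sorted(active)
    simplices = []
    for k in range(1, max_dim + 2):
        simplices.extend(combinations(active, k))
    return simplices

# Neurons 0, 2, and 3 fire together in this time window:
# 3 vertices, 3 directed edges, and 1 directed triangle.
slice_simplices = coactivation_simplices({0, 2, 3}, max_dim=2)
```

Repeating this per time window yields the time-evolving structure the contribution describes, with binary features indicating which simplices are active when.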

10 retrieved papers
Routing-SSNs for scalable relation selection

The authors introduce Routing-SSNs (R-SSNs), which employ a learnable gating mechanism to dynamically select the top-k most relevant relations from predefined relation classes. This addresses scalability and efficiency by reducing parameter count and inference time while maintaining competitive performance.
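The top-k routing idea can be sketched in a few lines. This is only the selection step, with made-up relation-class names and fixed gate scores; in R-SSNs the scores would be produced by a learnable gating network and the selection made differentiable, details this report does not cover.

```python
# Illustrative sketch of top-k relation routing: a gate score per
# relation class decides which relations a layer actually uses,
# so message passing only runs over the selected classes.

def route_topk(relation_scores, k):
    """Keep the k relation classes with the highest gate score."""
    ranked = sorted(relation_scores, key=relation_scores.get, reverse=True)
    return ranked[:k]

# Hypothetical relation classes with fixed (would-be learned) scores.
scores = {"boundary": 0.9, "coboundary": 0.4, "upper_adj": 0.7, "lower_adj": 0.1}
active_relations = route_topk(scores, k=2)
```

Dropping low-scoring relation classes is what yields the claimed savings: parameters and messages are only spent on the k relations the gate deems informative.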

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution
