Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding
Research Landscape Overview
Claimed Contributions
The authors propose SSNs, a novel class of Topological Deep Learning architectures that operate on semi-simplicial sets and leverage face-map–induced relations to capture directed higher-order motifs. They prove SSNs are strictly more expressive than message-passing GNNs, Directed GNNs, and Message-Passing Simplicial Neural Networks in the Weisfeiler-Leman hierarchy.
The authors introduce Dynamical Activity Complexes (DACs), directed simplicial complexes equipped with time-evolving binary features that encode neuronal co-activation. They formally prove that SSNs operating on DACs recover a broader class of topological invariants known to characterize brain network activity, invariants that existing graph and TDL models cannot capture.
The authors introduce Routing-SSNs (R-SSNs), which employ a learnable gating mechanism to dynamically select the top-k most relevant relations from predefined relation classes. This addresses scalability and efficiency by reducing parameter count and inference time while maintaining competitive performance.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
[13] Higher-Order Topological Directionality and Directed Simplicial Neural Networks
[18] Towards a Quantitative Theory of Digraph-Based Complexes and its Applications in Brain Network Analysis
Contribution Analysis
Detailed comparisons for each claimed contribution
Semi-Simplicial Neural Networks (SSNs)
The authors propose SSNs, a novel class of Topological Deep Learning architectures that operate on semi-simplicial sets and leverage face-map–induced relations to capture directed higher-order motifs. They prove SSNs are strictly more expressive than message-passing GNNs, Directed GNNs, and Message-Passing Simplicial Neural Networks in the Weisfeiler-Leman hierarchy.
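To make the face-map-induced relations concrete, the following is a minimal sketch (not the authors' implementation) of one message-passing layer over a toy semi-simplicial set. The 1-simplices (directed edges) carry features, the ordered face maps d0 and d1 return an edge's target and source vertex, and a relation is induced by matching one face map against another; because the face maps are ordered, relation (i, j) differs from (j, i), which is what encodes directionality. All names and the weight shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-dimensional semi-simplicial set: 4 vertices and 4 directed edges.
# Each edge e has ordered faces d0(e) = target vertex, d1(e) = source vertex.
edges = [(0, 1), (1, 2), (2, 0), (1, 3)]          # (source, target) pairs
d0 = np.array([t for (_, t) in edges])            # face map d0: edge -> target
d1 = np.array([s for (s, _) in edges])            # face map d1: edge -> source

def face_map_relation(i_faces, j_faces):
    """Pairs (e, f), e != f, with d_i(e) == d_j(f).
    Ordered face maps make this relation directed: (i, j) != (j, i)."""
    pairs = []
    for e, u in enumerate(i_faces):
        for f, v in enumerate(j_faces):
            if e != f and u == v:
                pairs.append((e, f))
    return pairs

# "Head-to-tail" relation: the source of e is the target of f,
# i.e. f feeds into e along the direction of the edges.
rel_01 = face_map_relation(d1, d0)

dim = 3
W_self = rng.normal(size=(dim, dim)) * 0.1        # illustrative weights
W_rel = rng.normal(size=(dim, dim)) * 0.1
h = rng.normal(size=(len(edges), dim))            # edge features

# One layer of relation-specific message passing on the 1-simplices.
msgs = np.zeros_like(h)
for e, f in rel_01:
    msgs[e] += h[f] @ W_rel.T
h_next = np.maximum(h @ W_self.T + msgs, 0.0)     # ReLU update
```

A full SSN would maintain one such weight matrix per face-map-induced relation and per simplex dimension; an ordinary undirected simplicial network collapses (i, j) and (j, i) into a single symmetric adjacency and so cannot distinguish the two directed motifs.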
Topology-grounded framework for brain dynamics representation learning
The authors introduce Dynamical Activity Complexes (DACs), directed simplicial complexes equipped with time-evolving binary features that encode neuronal co-activation. They formally prove that SSNs operating on DACs recover a broader class of topological invariants known to characterize brain network activity, invariants that existing graph and TDL models cannot capture.
[38] Multiscale Simplicial Complex Entropy Analysis of Heartbeat Dynamics
[39] Higher-order connection Laplacians for directed simplicial complexes
[40] Directed simplicial complexes in brain real-world networks
[41] Simplicial complexes: higher-order spectral dimension and dynamics
[42] Stability of synchronization in simplicial complexes
[43] Principled simplicial neural networks for trajectory prediction
[44] Neurospectrum: A Geometric and Topological Deep Learning Framework for Uncovering Spatiotemporal Signatures in Neural Activity
[45] Geometric and topological inference for deep representations of complex networks
[46] From Density to Void: Why Brain Networks Fail to Reveal Complex Higher-Order Structures
[47] Simplicial and topological descriptions of human brain dynamics
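The DAC construction can be illustrated with a toy example. The sketch below assumes (the paper's exact windowing rule is not reproduced here) that a simplex of the structural directed complex is "active" in a time window when every one of its neurons fires at least once inside that window, yielding the time-evolving binary features; the raster, the complex, and the helper name `dac_features` are all hypothetical.

```python
import numpy as np

# Toy spike raster: rows = neurons, columns = time bins (binary firing).
spikes = np.array([
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
], dtype=bool)

# Hypothetical structural complex: ordered tuples encode direction
# (e.g. an inferred synaptic ordering), so (0, 1) and (1, 0) differ.
complex_ = {
    1: [(0, 1), (1, 2), (2, 0)],   # directed edges
    2: [(0, 1, 2)],                # one directed triangle
}

def dac_features(spikes, simplices, window=2):
    """Binary time-evolving features: a simplex is active in a window
    iff every one of its neurons fires at least once inside it."""
    n_windows = spikes.shape[1] // window
    feats = np.zeros((len(simplices), n_windows), dtype=bool)
    for w in range(n_windows):
        active = spikes[:, w * window:(w + 1) * window].any(axis=1)
        for s, simplex in enumerate(simplices):
            feats[s, w] = active[list(simplex)].all()
    return feats

edge_feats = dac_features(spikes, complex_[1])   # shape (3 edges, 3 windows)
tri_feats = dac_features(spikes, complex_[2])    # shape (1 triangle, 3 windows)
```

In this toy raster the triangle (0, 1, 2) is active in the first two windows and silent in the last, so the sequence of active sub-complexes changes over time; it is these evolving higher-order activation patterns whose topological invariants the DAC formalism is designed to expose.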
Routing-SSNs for scalable relation selection
The authors introduce Routing-SSNs (R-SSNs), which employ a learnable gating mechanism to dynamically select the top-k most relevant relations from predefined relation classes. This addresses scalability and efficiency by reducing parameter count and inference time while maintaining competitive performance.
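The gating idea behind R-SSNs can be sketched in a few lines. The snippet below is an illustrative simplification, not the paper's architecture: each candidate relation class is represented by one precomputed adjacency, a learnable gate scores the relations from pooled features, and only the top-k survivors are mixed (with softmax weights) into the update, so the k - n_relations discarded message passes are never computed. All tensor shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, n_relations, k = 5, 4, 6, 2

# One adjacency per candidate relation class (hypothetical, precomputed).
adjs = rng.random((n_relations, n_nodes, n_nodes)) < 0.3
h = rng.normal(size=(n_nodes, dim))               # cell features

# Learnable parameters: a gate vector per relation and one weight per relation.
gate_w = rng.normal(size=(n_relations, dim)) * 0.1
W = rng.normal(size=(n_relations, dim, dim)) * 0.1

scores = gate_w @ h.mean(axis=0)                  # one routing logit per relation
top_k = np.argsort(scores)[-k:]                   # keep the k highest-scoring
weights = np.exp(scores[top_k])
weights = weights / weights.sum()                 # softmax over the survivors

# Aggregate messages only along the selected relations.
out = np.zeros_like(h)
for w_r, r in zip(weights, top_k):
    out += w_r * (adjs[r].astype(float) @ h @ W[r].T)
```

Because the discarded relations contribute neither messages nor gradients for this input, inference cost scales with k rather than with the full relation count, which is the efficiency argument behind the routing mechanism.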