Riemannian High-Order Pooling for Brain Foundation Models

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: EEG, brain-computer interface, representation learning, manifold learning
Abstract:

Electroencephalography (EEG) is a noninvasive technique for measuring the brain's electrical activity that supports a wide range of brain-computer interface applications. Motivated by the breakthroughs of Large Language Models (LLMs), recent efforts have begun to explore large EEG foundation models trained on broad unlabeled corpora. However, most advances focus on improving the backbone while neglecting the classification head: existing models often rely on a single class token, underutilizing the spatiotemporal structure and second-order statistics that are crucial for EEG decoding. We propose Riemannian High-Order Pooling (RHOP), a plug-and-play module that injects principled Riemannian statistics into the classifier. RHOP maps each token to a quotient Gaussian that jointly encodes mean and second-order information, yielding scale-invariant descriptors. Tokens are then aggregated by estimating a Riemannian Gaussian on the symmetric positive-definite (SPD) manifold, where the Fréchet mean and covariance are embedded into an SPD descriptor. The resulting normalized vector is fused with the class token for prediction. RHOP is backbone-agnostic and integrates with modern EEG foundation models, e.g., BIOT and LaBraM. Across diverse EEG benchmarks, it improves accuracy, robustness, and efficiency under full fine-tuning, linear probing, and from-scratch training settings.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND ITS JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes Riemannian High-Order Pooling (RHOP), a plug-and-play module that enhances EEG foundation model classifiers by injecting Riemannian geometric statistics into the classification head. It occupies a unique leaf in the taxonomy—'Foundation Models with Riemannian High-Order Pooling'—with no sibling papers, indicating this is a newly emerging research direction. The taxonomy contains 36 papers across multiple established branches (spatial filtering, deep networks, classifiers, preprocessing), yet the foundation model integration of Riemannian pooling appears to be an unexplored niche within this broader landscape.

The taxonomy reveals several neighboring directions: deep Riemannian networks that build manifold-aware layers from scratch (e.g., SPDNet variants), transformer architectures with second-order pooling that capture high-order dependencies, and hybrid models fusing Riemannian and Euclidean representations. RHOP diverges by targeting pretrained foundation models (BIOT, LaBraM) rather than training end-to-end architectures, positioning itself at the intersection of large-scale pretraining and geometric manifold learning. The absence of papers in its leaf suggests this integration strategy—retrofitting foundation models with Riemannian pooling—has not been systematically explored in prior work.

Among 20 candidates examined across three contributions, none were flagged as clearly refuting the proposed methods. The Quotient Gaussian embedding examined 1 candidate with no refutations, the RHOP module examined 10 candidates with no refutations, and the empirical validation framework examined 9 candidates with no refutations. This limited search scope—top-K semantic matches plus citation expansion—suggests that within the examined literature, no direct prior work implements quotient Gaussian embeddings or Riemannian pooling specifically for foundation model classification heads, though the analysis does not claim exhaustive coverage of all possible related work.

Based on the 20-candidate search, the work appears to occupy a sparse intersection between foundation models and Riemannian geometry. The taxonomy structure confirms that while Riemannian EEG methods are well-established, their integration into large-scale pretrained models is nascent. The analysis covers top semantic matches and citations but does not guarantee discovery of all relevant preprints, concurrent work, or domain-specific applications that may overlap with the proposed approach.

Taxonomy

Core-task Taxonomy Papers: 36
Claimed Contributions: 3
Contribution Candidate Papers Compared: 20
Refutable Papers: 0

Research Landscape Overview

Core task: Improving EEG foundation model classification through Riemannian geometric pooling.

The field of EEG decoding has increasingly embraced Riemannian geometry to exploit the symmetric positive-definite (SPD) structure of covariance matrices, yielding a rich taxonomy of approaches. At the broadest level, the landscape divides into several complementary directions: spatial filtering and feature extraction methods that leverage manifold-aware transformations (e.g., RSF Spatial Filtering[1]); deep network architectures that embed Riemannian layers for end-to-end learning (e.g., Riemannian Geometry Networks[2], Deep Riemannian Networks[9]); classifier frameworks and ensemble strategies that combine multiple Riemannian pipelines (e.g., FilterBank CSP Ensemble[7]); preprocessing and domain adaptation techniques that handle cross-session or cross-subject variability (e.g., XDAWN Transfer Learning[10], Seizure Detection Transfer[12]); hybrid models that fuse Riemannian and Euclidean representations (e.g., CNN Riemannian Hybrid[20]); application-specific solutions targeting motor imagery, emotion recognition, or neuromarketing (e.g., Deep Riemannian Neuromarketing[28]); optimization and computational tools for efficient manifold operations (e.g., pyRiemann-qiskit[5]); and emerging foundation models that integrate high-order pooling strategies to capture richer geometric structure.

Recent work has explored how to scale Riemannian methods beyond hand-crafted pipelines, with deep architectures (Deep Riemannian EEG[11], Discriminative SPD Learning[13]) learning task-specific manifold embeddings and hybrid approaches (Multiscale Convolutional Fusion[26]) blending spatial and temporal cues. A key tension lies between interpretability, where classical spatial filters remain transparent, and representational power, where deep networks can discover complex patterns at the cost of opacity.
Riemannian High-Order Pooling[0] sits within the foundation model branch, aiming to enhance large-scale pretrained EEG classifiers by incorporating geometric pooling that respects the manifold structure of covariance features. This contrasts with earlier deep Riemannian works like Deep Riemannian Networks[9], which focus on building manifold-aware layers from scratch, and with transformer-based methods such as Transformer Second-Order Pooling[8], which also leverage second-order statistics but may not fully exploit Riemannian metrics. By integrating high-order pooling into foundation models, Riemannian High-Order Pooling[0] bridges the gap between classical geometry-driven pipelines and modern large-scale pretraining paradigms.

Claimed Contributions

Quotient Gaussian Embedding for Scale-Invariant EEG Representations

The authors introduce a quotient Gaussian embedding that normalizes per-token covariances to correlation form, removing temporal scale discrepancies while preserving dependency structure. This embedding jointly encodes mean and second-order statistics, providing scale-invariant descriptors for EEG features.
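The construction can be sketched numerically. The snippet below is a minimal illustration, assuming the classical Gaussian-to-SPD embedding [[R + μμᵀ, μ], [μᵀ, 1]] applied to the correlation matrix R; the paper's exact quotient-manifold construction may differ, and the function name here is illustrative.

```python
import numpy as np

def quotient_gaussian_embed(tokens):
    """Sketch: embed a token matrix (d features x T samples) as an SPD
    matrix jointly encoding the mean and scale-invariant second-order
    statistics. Assumes the classical Gaussian-to-SPD embedding
    [[R + mu mu^T, mu], [mu^T, 1]]; the paper's quotient construction
    may differ."""
    mu = tokens.mean(axis=1)                 # per-feature mean
    C = np.cov(tokens)                       # d x d covariance
    d_inv = 1.0 / np.sqrt(np.diag(C))        # remove per-feature scales
    R = C * np.outer(d_inv, d_inv)           # correlation matrix
    top = np.hstack([R + np.outer(mu, mu), mu[:, None]])
    bot = np.hstack([mu[None, :], np.ones((1, 1))])
    return np.vstack([top, bot])             # (d+1) x (d+1), SPD
```

Because the output factors as A·diag(R, 1)·Aᵀ with A invertible, it remains symmetric positive-definite whenever R is, so downstream manifold operations stay well-defined.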

1 retrieved paper

Riemannian High-Order Pooling Module

The authors propose RHOP, a plug-and-play geometry-aware pooling head that aggregates token information by estimating a Riemannian Gaussian on the SPD manifold. This module preserves spatiotemporal structure and captures high-order dependencies through an SPD descriptor, addressing limitations of conventional global pooling methods.
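As a rough sketch of the aggregation step: the snippet below pools a set of SPD token descriptors using the log-Euclidean Fréchet mean, a common closed-form surrogate for the affine-invariant mean, and summarizes dispersion in the tangent space. The actual RHOP estimator (a full Riemannian Gaussian with Fréchet mean and covariance) is more elaborate; all names here are illustrative.

```python
import numpy as np

def spd_log(S):
    # Matrix logarithm via eigendecomposition (valid for SPD input).
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    # Matrix exponential of a symmetric matrix.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_pool(spd_tokens):
    """Pool token SPD descriptors into a single SPD matrix via the
    log-Euclidean Frechet mean; the scatter of the tangent vectors
    around it gives a scalar second-order summary of the tokens."""
    logs = np.stack([spd_log(S) for S in spd_tokens])
    mean_log = logs.mean(axis=0)
    mean = spd_exp(mean_log)                       # Frechet mean (log-Euclidean)
    scatter = logs - mean_log                      # tangent-space residuals
    dispersion = float((scatter ** 2).sum(axis=(1, 2)).mean())
    return mean, dispersion
```

The log-Euclidean metric is chosen here only because its mean has a closed form; iterative affine-invariant estimators (e.g., as implemented in pyRiemann) could be substituted without changing the interface.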

10 retrieved papers

Comprehensive Empirical Validation Framework

The authors provide extensive experimental validation demonstrating that RHOP improves accuracy, robustness, and efficiency across diverse EEG benchmarks. The validation covers multiple training settings including full fine-tuning, linear probing, and training from scratch with modern foundation models.

9 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current top-K core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, which is a partial signal of novelty, though one constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1: Quotient Gaussian Embedding for Scale-Invariant EEG Representations. As summarized above, this embedding normalizes per-token covariances to correlation form, jointly encoding mean and second-order statistics; 1 candidate was examined, with no refutations.

Contribution 2: Riemannian High-Order Pooling Module. RHOP aggregates token information by estimating a Riemannian Gaussian on the SPD manifold, preserving spatiotemporal structure and high-order dependencies; 10 candidates were examined, with no refutations.

Contribution 3: Comprehensive Empirical Validation Framework. The validation spans full fine-tuning, linear probing, and from-scratch training across diverse EEG benchmarks with modern foundation models; 9 candidates were examined, with no refutations.