
CRIA: A Cross-View Interaction and Instance-Adapted Pre-training Framework for Generalizable EEG Representations

Main: 32 pages
7 figures
3 tables
Bibliography: 1 page
Appendix: 1 page
Abstract

Extracting deep features from EEG data and effectively integrating information from multiple views remain significant challenges for building a generalizable pre-training framework for EEG representation learning. Most existing pre-training methods rely solely on the contextual semantics of a single view and fail to capture the complex, synergistic interactions among different perspectives, which limits the expressiveness and generalization of the learned representations. To address these issues, this paper proposes CRIA, an adaptive framework that uses variable-length, variable-channel coding to obtain a unified representation of EEG data across different datasets. In this work, cross-view information is defined as the integrated representation that emerges from the interaction among the temporal, spectral, and spatial views of EEG signals. The model employs a cross-attention mechanism to fuse temporal, spectral, and spatial features, and combines an attention-matrix masking strategy based on the information bottleneck principle with a novel view-masking pre-training scheme. Experimental results on the Temple University EEG corpus and the CHB-MIT dataset show that, under identical pre-training conditions, CRIA outperforms existing methods, achieving a balanced accuracy of 57.02% for multi-class event classification and 80.03% for anomaly detection, highlighting its strong generalization ability.
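To make the cross-view fusion described above concrete, the sketch below shows one way such a mechanism could look in PyTorch: temporal tokens attend over spectral and spatial tokens via cross-attention and the results are merged with a residual connection. All module names, shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Illustrative cross-attention fusion of temporal, spectral, and spatial
    EEG view embeddings (a sketch; not the CRIA reference implementation)."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # One cross-attention block per auxiliary view: temporal tokens act
        # as queries, spectral/spatial tokens as keys and values.
        self.attn_spec = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_spat = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, temporal, spectral, spatial):
        # Each view: (batch, tokens, dim) embeddings
        fused_spec, _ = self.attn_spec(temporal, spectral, spectral)
        fused_spat, _ = self.attn_spat(temporal, spatial, spatial)
        # Residual sum of the two cross-view interactions
        return self.norm(temporal + fused_spec + fused_spat)

# Toy usage: 8 EEG segments, 64 tokens per view, 128-dim embeddings
if __name__ == "__main__":
    fuse = CrossViewFusion()
    t, f, s = (torch.randn(8, 64, 128) for _ in range(3))
    print(fuse(t, f, s).shape)  # torch.Size([8, 64, 128])
```

The attention-matrix masking and view-masking pre-training objectives mentioned in the abstract would operate on top of a fusion module of this kind, for example by masking entries of the attention weights or dropping an entire view's tokens during pre-training.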

@article{liu2025_2506.16056,
  title={CRIA: A Cross-View Interaction and Instance-Adapted Pre-training Framework for Generalizable EEG Representations},
  author={Puchun Liu and C. L. Philip Chen and Yubin He and Tong Zhang},
  journal={arXiv preprint arXiv:2506.16056},
  year={2025}
}