TSception: Capturing Temporal Dynamics and Spatial Asymmetry from EEG for Emotion Recognition

IEEE Transactions on Affective Computing (TAC), 2021
Abstract

In this paper, we propose TSception, a multi-scale convolutional neural network that learns temporal dynamics and spatial asymmetry from electroencephalogram (EEG) signals. TSception consists of dynamic temporal, asymmetric spatial, and high-level fusion layers, which learn discriminative representations in the time and channel dimensions simultaneously. The dynamic temporal layer consists of multi-scale 1D convolutional kernels whose lengths are related to the sampling rate of the EEG signal, enabling it to learn dynamic temporal and frequency representations of EEG. The asymmetric spatial layer takes advantage of the asymmetric neural activations underlying emotional responses, learning discriminative global and hemispheric representations. The learned spatial representations are then fused by a high-level fusion layer. Using more generalized cross-validation settings, the proposed method is evaluated on two publicly available datasets, DEAP and MAHNOB-HCI. The performance of the proposed network is compared with previously reported methods, including SVM, KNN, FBFgMDM, FBTSC, unsupervised learning, DeepConvNet, ShallowConvNet, and EEGNet. Our method achieves higher classification accuracies and F1 scores than the compared methods in most of the experiments. The proposed method can be utilized for emotion recognition in emotion regulation therapy in the future. The source code can be found at https://github.com/yi-ding-cs/TSception
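The abstract states that the dynamic temporal layer uses multi-scale 1D kernels whose lengths are tied to the EEG sampling rate. The minimal numpy sketch below illustrates that idea under assumptions not spelled out in the abstract: the kernel-length ratios (0.5, 0.25, 0.125 of one second) and the 128 Hz sampling rate are illustrative choices, and the random kernels stand in for weights that the network would actually learn.

```python
import numpy as np

def tsception_kernel_lengths(sampling_rate, ratios=(0.5, 0.25, 0.125)):
    # Each multi-scale 1D kernel spans a fixed fraction of one second
    # of EEG, so its length scales with the sampling rate.
    # The ratio values here are illustrative assumptions.
    return [int(sampling_rate * r) for r in ratios]

def multiscale_temporal_features(signal, sampling_rate):
    # Convolve one EEG channel at each temporal scale. Random kernels
    # are hypothetical stand-ins for the learned convolution weights.
    rng = np.random.default_rng(0)
    feats = []
    for k in tsception_kernel_lengths(sampling_rate):
        kernel = rng.standard_normal(k) / k
        feats.append(np.convolve(signal, kernel, mode="valid"))
    return feats

# Example: a 4-second toy 10 Hz oscillation sampled at 128 Hz
fs = 128
x = np.sin(2 * np.pi * 10 * np.arange(4 * fs) / fs)
features = multiscale_temporal_features(x, fs)
print([f.shape[0] for f in features])  # one feature series per scale
```

Shorter kernels emphasize fast temporal dynamics (higher-frequency content) while longer kernels capture slower trends, which is the intuition behind learning multiple scales in parallel before the spatial layers.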
