Robust Emotion Recognition via Bi-Level Self-Supervised Continual Learning

13 May 2025
Adnan Ahmad
Bahareh Nakisa
Mohammad Naim Rastgoo
Abstract

Emotion recognition from physiological signals such as the electroencephalogram (EEG) has become an essential part of affective computing, providing an objective way to capture human emotions. However, physiological data are characterized by cross-subject variability and noisy labels, both of which hinder the performance of emotion recognition models. Existing domain adaptation and continual learning methods struggle to address these issues, especially under realistic conditions where data arrive as a continuous, unlabeled stream. To overcome these limitations, we propose SSOCL, a novel bi-level self-supervised continual learning framework built around a dynamic memory buffer. The bi-level architecture iteratively refines the buffer and its pseudo-label assignments so that representative samples are retained, enabling generalization from continuous, unlabeled physiological data streams; the assigned pseudo-labels are subsequently leveraged for accurate emotion prediction. Key components of the framework, a fast adaptation module and a cluster-mapping module, enable robust learning and effective handling of evolving data streams. Experiments on two mainstream EEG tasks demonstrate the framework's ability to adapt to continuous data streams while maintaining strong cross-subject generalization, outperforming existing approaches.
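As a reading aid, the following is a minimal sketch of the bi-level loop the abstract describes, assuming a linear encoder, a KMeans-based cluster-mapping step, and a nearest-to-centroid buffer policy. These choices, and all dimensions and names below, are illustrative assumptions, not the authors' SSOCL implementation.

# Hypothetical sketch of the bi-level loop: level 1 quickly adapts a
# self-supervised encoder to each incoming unlabeled batch; level 2
# clusters the batch together with the memory buffer to assign
# pseudo-labels and retain representative samples.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

IN_DIM = 32           # assumed number of EEG channels/features
FEAT_DIM = 8          # assumed embedding dimension
N_EMOTIONS = 3        # assumed number of emotion clusters
BUFFER_CAPACITY = 64  # assumed memory budget


class LinearEncoder:
    """Stand-in for the self-supervised encoder (a linear projection)."""

    def __init__(self, in_dim, out_dim):
        self.out_dim = out_dim
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))

    def __call__(self, x):
        return x @ self.W

    def fast_adapt(self, batch):
        # Level 1 ("fast adaptation module"): track the evolving data
        # distribution. A PCA refit on the current batch stands in for
        # the paper's self-supervised update step.
        centered = batch - batch.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.W = vt[: self.out_dim].T


def refine_buffer(buffer_x, batch_x, encoder):
    # Level 2 ("cluster-mapping module"): cluster buffer + batch in
    # feature space, treat cluster ids as pseudo-labels, and keep the
    # samples nearest each centroid as "representative". Cluster ids
    # are not aligned across iterations in this toy version; the
    # paper's module presumably handles that mapping.
    pool = np.concatenate([buffer_x, batch_x], axis=0)
    feats = encoder(pool)
    km = KMeans(n_clusters=N_EMOTIONS, n_init=10, random_state=0).fit(feats)
    keep = []
    per_cluster = BUFFER_CAPACITY // N_EMOTIONS
    for c in range(N_EMOTIONS):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(feats[idx] - km.cluster_centers_[c], axis=1)
        keep.extend(idx[np.argsort(dist)[:per_cluster]])
    return pool[keep], km.labels_[keep]


buffer_x = np.empty((0, IN_DIM))
encoder = LinearEncoder(IN_DIM, FEAT_DIM)
for step in range(5):  # simulated unlabeled, drifting EEG stream
    batch = rng.normal(size=(32, IN_DIM)) + 0.2 * step  # subject drift
    encoder.fast_adapt(batch)                                   # level 1
    buffer_x, pseudo = refine_buffer(buffer_x, batch, encoder)  # level 2
    print(step, buffer_x.shape, np.bincount(pseudo, minlength=N_EMOTIONS))

The sketch stops at buffer refinement; in the paper, the retained pseudo-labels are then used to train the downstream emotion classifier.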

@article{ahmad2025_2505.10575,
  title={Robust Emotion Recognition via Bi-Level Self-Supervised Continual Learning},
  author={Adnan Ahmad and Bahareh Nakisa and Mohammad Naim Rastgoo},
  journal={arXiv preprint arXiv:2505.10575},
  year={2025}
}