Bridging Speech Emotion Recognition and Personality: Dataset and Temporal Interaction Condition Network

20 May 2025
Yuan Gao
Hao Shi
Yahui Fu
Chenhui Chu
Tatsuya Kawahara
Abstract

This study investigates the interaction between personality traits and emotional expression, exploring how personality information can improve speech emotion recognition (SER). We collected personality annotations for the IEMOCAP dataset, and statistical analysis identified significant correlations between personality traits and emotional expressions. To extract fine-grained personality features, we propose a temporal interaction condition network (TICN), in which personality features are integrated with HuBERT-based acoustic features for SER. Experiments show that incorporating ground-truth personality traits significantly enhances valence recognition, improving the concordance correlation coefficient (CCC) from 0.698 to 0.785 over a baseline without personality information. For practical applications in dialogue systems, where personality information about the user is unavailable, we develop a front-end module for automatic personality recognition. Using these automatically predicted traits as inputs to the proposed TICN, we achieve a CCC of 0.776 for valence recognition, an 11.17% relative improvement over the baseline. These findings confirm the effectiveness of personality-aware SER and provide a solid foundation for further exploration of personality-aware speech processing applications.
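
The abstract reports valence recognition as concordance correlation coefficient (CCC) scores. For reference, Lin's CCC has a standard closed form; the minimal NumPy sketch below (the function name concordance_cc is ours, not from the paper) shows how such a score is conventionally computed from predicted and ground-truth valence values.

import numpy as np

def concordance_cc(preds: np.ndarray, labels: np.ndarray) -> float:
    """Lin's concordance correlation coefficient (CCC).

    Measures agreement between predictions and labels, penalizing both
    low correlation and systematic shifts in mean or scale; 1.0 means
    perfect agreement, 0.0 means no agreement.
    """
    mean_p, mean_l = preds.mean(), labels.mean()
    var_p, var_l = preds.var(), labels.var()
    cov = ((preds - mean_p) * (labels - mean_l)).mean()
    return 2.0 * cov / (var_p + var_l + (mean_p - mean_l) ** 2)

Consistency check on the quoted figures: the 11.17% relative improvement follows from the reported scores, since (0.776 - 0.698) / 0.698 ≈ 0.1117.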

@article{gao2025_2505.13978,
  title={Bridging Speech Emotion Recognition and Personality: Dataset and Temporal Interaction Condition Network},
  author={Yuan Gao and Hao Shi and Yahui Fu and Chenhui Chu and Tatsuya Kawahara},
  journal={arXiv preprint arXiv:2505.13978},
  year={2025}
}