
Explainable speech emotion recognition through attentive pooling: insights from attention-based temporal localization

Main: 4 pages · 5 figures · 2 tables · Bibliography: 1 page
Abstract

State-of-the-art transformer models for Speech Emotion Recognition (SER) rely on temporal feature aggregation, yet advanced pooling methods remain underexplored. We systematically benchmark pooling strategies, including Multi-Query Multi-Head Attentive Statistics Pooling, which achieves a 3.5 percentage point macro F1 gain over average pooling. Attention analysis shows that 15 percent of frames capture 80 percent of the emotion cues, revealing a localized pattern of emotional information. Inspection of high-attention frames further shows that non-linguistic vocalizations and hyperarticulated phonemes are disproportionately prioritized during pooling, mirroring human perceptual strategies. Our findings position attentive pooling as both a performant SER mechanism and a biologically plausible tool for explainable emotion localization. On the Interspeech 2025 Speech Emotion Recognition in Naturalistic Conditions Challenge, our approach obtained a macro F1 score of 0.3649.
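To make the pooling mechanism named in the abstract concrete, below is a minimal NumPy sketch of multi-query multi-head attentive statistics pooling. It is an illustration under simplifying assumptions, not the authors' implementation: the query vectors are randomly initialized stand-ins for learned parameters, and head dimensions, scaling, and epsilon are arbitrary choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mqmha_stats_pool(frames, n_heads=4, n_queries=2, seed=0):
    """Illustrative multi-query multi-head attentive statistics pooling.

    frames: (T, D) frame-level features; D must be divisible by n_heads.
    Returns a fixed-length vector of size 2 * D * n_queries
    (attention-weighted mean and std per query, heads concatenated).
    """
    T, D = frames.shape
    assert D % n_heads == 0
    d_h = D // n_heads
    rng = np.random.default_rng(seed)
    # Hypothetical random queries; in a trained model these are learned.
    queries = rng.standard_normal((n_queries, n_heads, d_h))
    heads = frames.reshape(T, n_heads, d_h)  # split channels into heads
    pooled = []
    for q in queries:
        # Scaled dot-product score of each frame chunk against its head's query.
        scores = np.einsum('thd,hd->th', heads, q) / np.sqrt(d_h)  # (T, H)
        w = softmax(scores, axis=0)  # attention weights over time
        mean = np.einsum('th,thd->hd', w, heads)             # weighted mean
        var = np.einsum('th,thd->hd', w, (heads - mean)**2)  # weighted variance
        std = np.sqrt(var + 1e-8)
        pooled.append(np.concatenate([mean.ravel(), std.ravel()]))
    return np.concatenate(pooled)
```

The attention weights `w` are exactly what the paper inspects for explainability: frames with large weights are those the pooled statistics depend on most.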

@article{leygue2025_2506.15754,
  title={Explainable speech emotion recognition through attentive pooling: insights from attention-based temporal localization},
  author={Tahitoa Leygue and Astrid Sabourin and Christian Bolzmacher and Sylvain Bouchigny and Margarita Anastassova and Quoc-Cuong Pham},
  journal={arXiv preprint arXiv:2506.15754},
  year={2025}
}