EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network


22 January 2025
Shamin Bin Habib Avro
Taieba Taher
Nursadul Mamun

Papers citing "EmoTech: A Multi-modal Speech Emotion Recognition Using Multi-source Low-level Information with Hybrid Recurrent Network"

EmoFormer: A Text-Independent Speech Emotion Recognition using a Hybrid Transformer-CNN model
Rashedul Hasan
Meher Nigar
Nursadul Mamun
Sayan Paul
22 Jan 2025