Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition

28 May 2024
Yihang Dong
Xuhang Chen
Yanyan Shen
Michael Kwok-Po Ng
Tao Qian
Shuqiang Wang

Papers citing "Multi-modal Mood Reader: Pre-trained Model Empowers Cross-Subject Emotion Recognition"

DocDeshadower: Frequency-aware Transformer for Document Shadow Removal
Shenghong Luo
Ruifeng Xu
Xuhang Chen
Zinuo Li
Chi-Man Pun
Shuqiang Wang
28 Jul 2023
HetEmotionNet: Two-Stream Heterogeneous Graph Recurrent Neural Network for Multi-modal Emotion Recognition
Ziyu Jia
Youfang Lin
Jing Wang
Zhiyang Feng
Xiangheng Xie
Caijie Chen
07 Aug 2021