Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition

21 September 2020
Wenliang Dai, Zihan Liu, Tiezheng Yu, Pascale Fung

Papers citing "Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition"

14 / 14 papers shown

TACFN: Transformer-based Adaptive Cross-modal Fusion Network for Multimodal Emotion Recognition
Feng Liu, Ziwang Fu, Yixuan Wang, Qijian Zheng
10 May 2025

Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention
Joe Dhanith, Shravan Venkatraman, Modigari Narendra, Vigya Sharma, Santhosh Malarvannan
20 Feb 2025

End-to-end Semantic-centric Video-based Multimodal Affective Computing
Ronghao Lin, Ying Zeng, Sijie Mai, Haifeng Hu
14 Aug 2024

Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations
Sijie Mai, Ying Zeng, Haifeng Hu
31 Oct 2022

FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference
Qinglan Wei, Xu-Juan Huang, Yuan Zhang
21 Sep 2022

An Efficient End-to-End Transformer with Progressive Tri-modal Attention for Multi-modal Emotion Recognition
Yang Wu, Pai Peng, Zhenyu Zhang, Yanyan Zhao, Bing Qin
20 Sep 2022

Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands
Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Pascale Fung
06 Jul 2022

COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition
M. Tellamekala, Shahin Amiriparian, Björn W. Schuller, Elisabeth André, T. Giesbrecht, M. Valstar
12 Jun 2022

One Country, 700+ Languages: NLP Challenges for Underrepresented Languages and Dialects in Indonesia
Alham Fikri Aji, Genta Indra Winata, Fajri Koto, Samuel Cahyawijaya, Ade Romadhony, ..., David Moeljadi, Radityo Eko Prasojo, Timothy Baldwin, Jey Han Lau, Sebastian Ruder
24 Mar 2022

CI-AVSR: A Cantonese Audio-Visual Speech Dataset for In-car Command Recognition
Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Peng-Tao Xu, ..., Genta Indra Winata, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung
11 Jan 2022

LMR-CBT: Learning Modality-fused Representations with CB-Transformer for Multimodal Emotion Recognition from Unaligned Multimodal Sequences
Ziwang Fu, Feng Liu, Hanyang Wang, Siyuan Shen, Jiahao Zhang, Jiayin Qi, Xiangling Fu, Aimin Zhou
03 Dec 2021

Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization
Tiezheng Yu, Wenliang Dai, Zihan Liu, Pascale Fung
06 Sep 2021

Weakly-supervised Multi-task Learning for Multimodal Affect Recognition
Wenliang Dai, Samuel Cahyawijaya, Yejin Bang, Pascale Fung
23 Apr 2021

MLQA: Evaluating Cross-lingual Extractive Question Answering
Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, Holger Schwenk
16 Oct 2019