VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition
arXiv:2208.11450, 24 August 2022
Puneet Kumar, Sarthak Malik, Balasubramanian Raman, Xiaobai Li
Papers citing "VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition" (2 papers):

VisioPhysioENet: Multimodal Engagement Detection using Visual and Physiological Signals
Alakhsimar Singh, Nischay Verma, Kanav Goyal, Amritpal Singh, Puneet Kumar, Xiaobai Li
24 Sep 2024

An AutoML-based Approach to Multimodal Image Sentiment Analysis
Vasco Lopes, António Gaspar, Luís A. Alexandre, João Paulo Cordeiro
16 Feb 2021