
VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition

arXiv:2208.11450 · 24 August 2022
Puneet Kumar, Sarthak Malik, Balasubramanian Raman, Xiaobai Li

Papers citing "VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition"

2 / 2 papers shown
VisioPhysioENet: Multimodal Engagement Detection using Visual and Physiological Signals
Alakhsimar Singh, Nischay Verma, Kanav Goyal, Amritpal Singh, Puneet Kumar, Xiaobai Li
24 Sep 2024

An AutoML-based Approach to Multimodal Image Sentiment Analysis
Vasco Lopes, António Gaspar, Luís A. Alexandre, João Paulo Cordeiro
16 Feb 2021