ResearchTrend.AI
Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data

24 August 2020
Kyle Ross, Paul Hungler, Ali Etemad
arXiv:2008.10726

Papers citing "Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data"

2 / 2 papers shown
MVP: Multimodal Emotion Recognition based on Video and Physiological Signals
Valeriya Strizhkova, Hadi Kachmar, Hava Chaptoukaev, Raphael Kalandadze, Natia Kukhilava, ..., Maria A. Zuluaga, Michal Balazia, A. Dantcheva, François Brémond, Laura M. Ferrari
06 Jan 2025
Transformer-Based Self-Supervised Learning for Emotion Recognition
Juan Vazquez-Rodriguez, G. Lefebvre, Julien Cumin, James L. Crowley
08 Apr 2022