Enhancing Sentiment Analysis through Multimodal Fusion: A BERT-DINOv2 Approach

11 March 2025
Taoxu Zhao
Meisi Li
Kehao Chen
Liye Wang
Xucheng Zhou
Kunal Chaturvedi
Mukesh Prasad
Ali Anaissi
Ali Braytee
Abstract

Multimodal sentiment analysis extends conventional sentiment analysis, which traditionally relies solely on text, by incorporating information from additional modalities such as images and audio. This paper proposes a novel multimodal sentiment analysis architecture that integrates text and image data to provide a more comprehensive understanding of sentiment. Text features are extracted with BERT, a pre-trained natural language processing model, and image features with DINOv2, a vision-transformer-based model. The textual and visual latent features are combined through three proposed fusion techniques: the Basic Fusion Model, the Self-Attention Fusion Model, and the Dual-Attention Fusion Model. Experiments on three datasets (Memotion 7k, MVSA-Single, and MVSA-Multiple) demonstrate the viability and practicality of the proposed multimodal architecture.
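As a minimal sketch of the described pipeline (not the authors' implementation), the snippet below fuses a BERT [CLS] embedding with a DINOv2 [CLS] embedding through a single self-attention layer, roughly in the spirit of the Self-Attention Fusion Model. The Hugging Face checkpoint names, the 8 attention heads, the mean pooling, and the 3-class output head are all illustrative assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel

class SelfAttentionFusion(nn.Module):
    """Sketch: fuse BERT text features and DINOv2 image features for sentiment."""

    def __init__(self, dim: int = 768, num_classes: int = 3):  # assumed sizes
        super().__init__()
        # Both base checkpoints emit 768-d hidden states (assumed model choices)
        self.text_encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.image_encoder = AutoModel.from_pretrained("facebook/dinov2-base")
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, input_ids, attention_mask, pixel_values):
        # [CLS]-token latent features from each modality: (B, 768)
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]
        image_feat = self.image_encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
        # Treat the two modalities as a length-2 sequence and let
        # self-attention mix them: (B, 2, 768) -> (B, 2, 768)
        tokens = torch.stack([text_feat, image_feat], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        # Pool the fused tokens and classify: (B, num_classes)
        return self.classifier(fused.mean(dim=1))

# Usage (inputs come from BertTokenizer and the DINOv2 image processor):
#   model = SelfAttentionFusion()
#   logits = model(input_ids, attention_mask, pixel_values)

A basic-fusion variant could simply concatenate the two feature vectors before the classifier; how the paper's Basic and Dual-Attention models differ in detail is not stated in the abstract.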

@article{zhao2025_2503.07943,
  title={Enhancing Sentiment Analysis through Multimodal Fusion: A BERT-DINOv2 Approach},
  author={Taoxu Zhao and Meisi Li and Kehao Chen and Liye Wang and Xucheng Zhou and Kunal Chaturvedi and Mukesh Prasad and Ali Anaissi and Ali Braytee},
  journal={arXiv preprint arXiv:2503.07943},
  year={2025}
}